Test Report: Docker_Linux_crio 22182

d8910aedaf59f4b051fab9f3c680e262e7105014:2025-12-17:42820

Failed tests (28/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.26
44 TestAddons/parallel/Registry 14.69
45 TestAddons/parallel/RegistryCreds 0.45
46 TestAddons/parallel/Ingress 147.04
47 TestAddons/parallel/InspektorGadget 5.27
48 TestAddons/parallel/MetricsServer 5.32
50 TestAddons/parallel/CSI 24.51
51 TestAddons/parallel/Headlamp 2.6
52 TestAddons/parallel/CloudSpanner 5.26
53 TestAddons/parallel/LocalPath 10.14
54 TestAddons/parallel/NvidiaDevicePlugin 6.26
55 TestAddons/parallel/Yakd 6.26
56 TestAddons/parallel/AmdGpuDevicePlugin 5.26
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.55
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 5.14
294 TestJSONOutput/pause/Command 1.72
300 TestJSONOutput/unpause/Command 1.95
366 TestPause/serial/Pause 6.13
452 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.73
453 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.72
457 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.28
464 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.61
475 TestStartStop/group/embed-certs/serial/Pause 6.67
478 TestStartStop/group/old-k8s-version/serial/Pause 5.95
483 TestStartStop/group/no-preload/serial/Pause 6.95
487 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.88
489 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.34
496 TestStartStop/group/newest-cni/serial/Pause 5.61
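Each failure below can be re-run on its own with the standard Go test runner. A minimal sketch, assuming a minikube source checkout and the test/integration package layout this suite uses; any driver or container-runtime flags the CI job passes to the tests are omitted here and would need to be added per the project's integration-test docs:

    # re-run a single failed test from the table above (name taken from the first row)
    go test -v -timeout 30m -run 'TestAddons/serial/Volcano' ./test/integration/...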
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable volcano --alsologtostderr -v=1: exit status 11 (259.295143ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 07:51:54.000259  565805 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:51:54.000524  565805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:51:54.000549  565805 out.go:374] Setting ErrFile to fd 2...
	I1217 07:51:54.000553  565805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:51:54.000839  565805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:51:54.001119  565805 mustload.go:66] Loading cluster: addons-910958
	I1217 07:51:54.001439  565805 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:51:54.001453  565805 addons.go:622] checking whether the cluster is paused
	I1217 07:51:54.001542  565805 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:51:54.001561  565805 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:51:54.001968  565805 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:51:54.021928  565805 ssh_runner.go:195] Run: systemctl --version
	I1217 07:51:54.021991  565805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:51:54.041303  565805 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:51:54.134166  565805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:51:54.134341  565805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:51:54.164355  565805 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:51:54.164381  565805 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:51:54.164387  565805 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:51:54.164393  565805 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:51:54.164397  565805 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:51:54.164405  565805 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:51:54.164409  565805 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:51:54.164413  565805 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:51:54.164418  565805 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:51:54.164429  565805 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:51:54.164434  565805 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:51:54.164438  565805 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:51:54.164443  565805 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:51:54.164448  565805 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:51:54.164452  565805 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:51:54.164463  565805 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:51:54.164467  565805 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:51:54.164472  565805 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:51:54.164477  565805 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:51:54.164481  565805 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:51:54.164487  565805 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:51:54.164492  565805 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:51:54.164497  565805 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:51:54.164502  565805 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:51:54.164507  565805 cri.go:89] found id: ""
	I1217 07:51:54.164577  565805 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:51:54.179736  565805 out.go:203] 
	W1217 07:51:54.181297  565805 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:51:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:51:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:51:54.181345  565805 out.go:285] * 
	* 
	W1217 07:51:54.185758  565805 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:51:54.187501  565805 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
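The disable step here fails with MK_ADDON_DISABLE_PAUSED because minikube's paused-cluster check shells out to `sudo runc list -f json`, and /run/runc does not exist on this crio node; the same error recurs in the other addon-disable failures below. A minimal sketch of reproducing the check by hand inside the node (entered via `out/minikube-linux-amd64 -p addons-910958 ssh`); the alternate --root path is an assumption about where crio might keep its runc state, not something shown in this log:

    # the CRI-level listing minikube runs first succeeds
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the follow-up runtime state listing is what fails (no /run/runc on this node)
    sudo runc list -f json
    # runc accepts an explicit state root; if crio keeps its state elsewhere
    # (e.g. /run/runc-crio, an assumed path), the same listing would be
    sudo runc --root /run/runc-crio list -f json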

TestAddons/parallel/Registry (14.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.729243ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003394574s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003928316s
addons_test.go:394: (dbg) Run:  kubectl --context addons-910958 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-910958 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-910958 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.205193464s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 ip
2025/12/17 07:52:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable registry --alsologtostderr -v=1: exit status 11 (251.217336ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 07:52:17.523052  568424 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:17.523174  568424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:17.523189  568424 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:17.523196  568424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:17.523423  568424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:17.523791  568424 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:17.524188  568424 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:17.524211  568424 addons.go:622] checking whether the cluster is paused
	I1217 07:52:17.524315  568424 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:17.524338  568424 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:17.524852  568424 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:17.544608  568424 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:17.544678  568424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:17.563090  568424 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:17.655577  568424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:17.655702  568424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:17.685935  568424 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:17.685959  568424 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:17.685963  568424 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:17.685967  568424 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:17.685969  568424 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:17.685975  568424 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:17.685978  568424 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:17.685981  568424 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:17.685984  568424 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:17.685994  568424 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:17.685999  568424 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:17.686003  568424 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:17.686010  568424 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:17.686014  568424 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:17.686018  568424 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:17.686036  568424 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:17.686048  568424 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:17.686053  568424 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:17.686059  568424 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:17.686062  568424 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:17.686068  568424 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:17.686073  568424 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:17.686076  568424 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:17.686079  568424 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:17.686082  568424 cri.go:89] found id: ""
	I1217 07:52:17.686127  568424 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:17.701625  568424 out.go:203] 
	W1217 07:52:17.702868  568424 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:17.702903  568424 out.go:285] * 
	* 
	W1217 07:52:17.707059  568424 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:17.708595  568424 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.69s)
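The registry checks themselves passed; only the trailing disable step hit the same runc error described above. A minimal sketch of repeating those checks by hand, reusing the node IP the test resolved (192.168.49.2); the pod name registry-probe is made up for this sketch, and /v2/_catalog is the standard registry HTTP API path rather than the URL the test fetched:

    # in-cluster probe, mirroring the test's busybox wget
    kubectl --context addons-910958 run --rm registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # host-side probe against the registry port published on the node IP
    curl -s http://192.168.49.2:5000/v2/_catalog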

TestAddons/parallel/RegistryCreds (0.45s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.994511ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-910958
addons_test.go:334: (dbg) Run:  kubectl --context addons-910958 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (274.249411ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 07:52:21.655632  568886 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:21.655899  568886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:21.655909  568886 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:21.655913  568886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:21.656106  568886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:21.656403  568886 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:21.656770  568886 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:21.656788  568886 addons.go:622] checking whether the cluster is paused
	I1217 07:52:21.656876  568886 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:21.656889  568886 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:21.657242  568886 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:21.678886  568886 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:21.678949  568886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:21.700110  568886 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:21.800474  568886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:21.800597  568886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:21.833073  568886 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:21.833100  568886 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:21.833107  568886 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:21.833111  568886 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:21.833116  568886 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:21.833120  568886 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:21.833125  568886 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:21.833130  568886 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:21.833135  568886 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:21.833142  568886 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:21.833147  568886 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:21.833153  568886 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:21.833165  568886 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:21.833170  568886 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:21.833178  568886 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:21.833199  568886 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:21.833207  568886 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:21.833214  568886 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:21.833222  568886 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:21.833228  568886 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:21.833235  568886 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:21.833244  568886 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:21.833251  568886 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:21.833256  568886 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:21.833260  568886 cri.go:89] found id: ""
	I1217 07:52:21.833319  568886 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:21.849986  568886 out.go:203] 
	W1217 07:52:21.851567  568886 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:21.851594  568886 out.go:285] * 
	* 
	W1217 07:52:21.855884  568886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:21.857616  568886 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.45s)

TestAddons/parallel/Ingress (147.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-910958 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-910958 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-910958 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [e196e772-944c-4091-9755-316be396082f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [e196e772-944c-4091-9755-316be396082f] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004126867s
I1217 07:52:18.746274  556055 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.398462919s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-910958 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
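The curl step above failed with remote exit status 28, which is curl's "operation timed out" error code, so no response came back from the ingress controller before the transfer gave up. A hedged way to re-run that probe with an explicit timeout and status-code capture (the extra flags are additions for diagnosis, not part of the original test command):

    out/minikube-linux-amd64 -p addons-910958 ssh \
      "curl -s -o /dev/null -w '%{http_code}' --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"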
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-910958
helpers_test.go:244: (dbg) docker inspect addons-910958:

-- stdout --
	[
	    {
	        "Id": "baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26",
	        "Created": "2025-12-17T07:50:30.93101818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 558558,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T07:50:30.968094981Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/hostname",
	        "HostsPath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/hosts",
	        "LogPath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26-json.log",
	        "Name": "/addons-910958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-910958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-910958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26",
	                "LowerDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-910958",
	                "Source": "/var/lib/docker/volumes/addons-910958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-910958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-910958",
	                "name.minikube.sigs.k8s.io": "addons-910958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e92eb2edad02ba7482ca522886cda08a3b3d3d9a073dbda8d59f3204ce839efb",
	            "SandboxKey": "/var/run/docker/netns/e92eb2edad02",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-910958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37b37450991e7ebf3dd0772299b7ae7e43842e4360f2197f6db56f7931547f66",
	                    "EndpointID": "f792db5c7c829b09e4c3877a09c931775c6aba89629b611c029660ab679db13b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "02:c4:26:57:1f:39",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-910958",
	                        "baf2bab91de7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
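For reference, the published SSH port buried in the inspect output above can be read back directly with the same Go template minikube uses earlier in this report; for this run it should print 33170:

    docker container inspect addons-910958 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'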
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-910958 -n addons-910958
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-910958 logs -n 25: (1.235661465s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-777344 --alsologtostderr --binary-mirror http://127.0.0.1:43583 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-777344 │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │                     │
	│ delete  │ -p binary-mirror-777344                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-777344 │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ addons  │ disable dashboard -p addons-910958                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │                     │
	│ addons  │ enable dashboard -p addons-910958                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │                     │
	│ start   │ -p addons-910958 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:51 UTC │
	│ addons  │ addons-910958 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:51 UTC │                     │
	│ addons  │ addons-910958 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ enable headlamp -p addons-910958 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ ip      │ addons-910958 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │ 17 Dec 25 07:52 UTC │
	│ addons  │ addons-910958 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ ssh     │ addons-910958 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-910958                                                                                                                                                                                                                                                                                                                                                                                           │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │ 17 Dec 25 07:52 UTC │
	│ addons  │ addons-910958 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ ssh     │ addons-910958 ssh cat /opt/local-path-provisioner/pvc-fcb743b3-de9d-497c-9368-b22d621e1e69_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │ 17 Dec 25 07:52 UTC │
	│ addons  │ addons-910958 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ addons-910958 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ ip      │ addons-910958 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-910958        │ jenkins │ v1.37.0 │ 17 Dec 25 07:54 UTC │ 17 Dec 25 07:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 07:50:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 07:50:09.715842  557899 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:50:09.715967  557899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:50:09.715981  557899 out.go:374] Setting ErrFile to fd 2...
	I1217 07:50:09.715985  557899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:50:09.716164  557899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:50:09.716778  557899 out.go:368] Setting JSON to false
	I1217 07:50:09.717785  557899 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5555,"bootTime":1765952255,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 07:50:09.717862  557899 start.go:143] virtualization: kvm guest
	I1217 07:50:09.720336  557899 out.go:179] * [addons-910958] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 07:50:09.722077  557899 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 07:50:09.722093  557899 notify.go:221] Checking for updates...
	I1217 07:50:09.725438  557899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 07:50:09.727265  557899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:50:09.729365  557899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 07:50:09.731073  557899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 07:50:09.732951  557899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 07:50:09.734973  557899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 07:50:09.760335  557899 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 07:50:09.760454  557899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:50:09.820111  557899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 07:50:09.809514658 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:50:09.820219  557899 docker.go:319] overlay module found
	I1217 07:50:09.822350  557899 out.go:179] * Using the docker driver based on user configuration
	I1217 07:50:09.824318  557899 start.go:309] selected driver: docker
	I1217 07:50:09.824367  557899 start.go:927] validating driver "docker" against <nil>
	I1217 07:50:09.824381  557899 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 07:50:09.825045  557899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:50:09.888973  557899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 07:50:09.877771707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:50:09.889135  557899 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 07:50:09.889356  557899 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 07:50:09.891413  557899 out.go:179] * Using Docker driver with root privileges
	I1217 07:50:09.892872  557899 cni.go:84] Creating CNI manager for ""
	I1217 07:50:09.892946  557899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:50:09.892960  557899 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 07:50:09.893027  557899 start.go:353] cluster config:
	{Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1217 07:50:09.894685  557899 out.go:179] * Starting "addons-910958" primary control-plane node in "addons-910958" cluster
	I1217 07:50:09.896084  557899 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 07:50:09.897591  557899 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 07:50:09.899016  557899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:50:09.899058  557899 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 07:50:09.899077  557899 cache.go:65] Caching tarball of preloaded images
	I1217 07:50:09.899085  557899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 07:50:09.899206  557899 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 07:50:09.899226  557899 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 07:50:09.899661  557899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/config.json ...
	I1217 07:50:09.899697  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/config.json: {Name:mk796d49a15f21053d007d40367a1b2b7c23560b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:09.917554  557899 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 07:50:09.917781  557899 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 07:50:09.917806  557899 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 07:50:09.917812  557899 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 07:50:09.917821  557899 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 07:50:09.917828  557899 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 07:50:23.253947  557899 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 07:50:23.253984  557899 cache.go:243] Successfully downloaded all kic artifacts
	I1217 07:50:23.254026  557899 start.go:360] acquireMachinesLock for addons-910958: {Name:mkaedf734c4ba4da4503e198fef98048b1048577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 07:50:23.254132  557899 start.go:364] duration metric: took 87.03µs to acquireMachinesLock for "addons-910958"
	I1217 07:50:23.254158  557899 start.go:93] Provisioning new machine with config: &{Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 07:50:23.254237  557899 start.go:125] createHost starting for "" (driver="docker")
	I1217 07:50:23.256307  557899 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 07:50:23.256636  557899 start.go:159] libmachine.API.Create for "addons-910958" (driver="docker")
	I1217 07:50:23.256672  557899 client.go:173] LocalClient.Create starting
	I1217 07:50:23.256773  557899 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 07:50:23.379071  557899 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 07:50:23.455440  557899 cli_runner.go:164] Run: docker network inspect addons-910958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 07:50:23.473199  557899 cli_runner.go:211] docker network inspect addons-910958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 07:50:23.473421  557899 network_create.go:284] running [docker network inspect addons-910958] to gather additional debugging logs...
	I1217 07:50:23.473542  557899 cli_runner.go:164] Run: docker network inspect addons-910958
	W1217 07:50:23.491227  557899 cli_runner.go:211] docker network inspect addons-910958 returned with exit code 1
	I1217 07:50:23.491274  557899 network_create.go:287] error running [docker network inspect addons-910958]: docker network inspect addons-910958: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-910958 not found
	I1217 07:50:23.491289  557899 network_create.go:289] output of [docker network inspect addons-910958]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-910958 not found
	
	** /stderr **
	I1217 07:50:23.491414  557899 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 07:50:23.510197  557899 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002072900}
	I1217 07:50:23.510260  557899 network_create.go:124] attempt to create docker network addons-910958 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 07:50:23.510357  557899 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-910958 addons-910958
	I1217 07:50:23.561243  557899 network_create.go:108] docker network addons-910958 192.168.49.0/24 created
	I1217 07:50:23.561289  557899 kic.go:121] calculated static IP "192.168.49.2" for the "addons-910958" container
	I1217 07:50:23.561378  557899 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 07:50:23.578783  557899 cli_runner.go:164] Run: docker volume create addons-910958 --label name.minikube.sigs.k8s.io=addons-910958 --label created_by.minikube.sigs.k8s.io=true
	I1217 07:50:23.597215  557899 oci.go:103] Successfully created a docker volume addons-910958
	I1217 07:50:23.597318  557899 cli_runner.go:164] Run: docker run --rm --name addons-910958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910958 --entrypoint /usr/bin/test -v addons-910958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 07:50:27.015458  557899 cli_runner.go:217] Completed: docker run --rm --name addons-910958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910958 --entrypoint /usr/bin/test -v addons-910958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (3.418095293s)
	I1217 07:50:27.015490  557899 oci.go:107] Successfully prepared a docker volume addons-910958
	I1217 07:50:27.015519  557899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:50:27.015528  557899 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 07:50:27.015617  557899 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-910958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 07:50:30.857629  557899 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-910958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.841964972s)
	I1217 07:50:30.857675  557899 kic.go:203] duration metric: took 3.842142462s to extract preloaded images to volume ...
	W1217 07:50:30.857776  557899 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 07:50:30.857817  557899 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 07:50:30.857872  557899 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 07:50:30.914826  557899 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-910958 --name addons-910958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-910958 --network addons-910958 --ip 192.168.49.2 --volume addons-910958:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 07:50:31.186277  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Running}}
	I1217 07:50:31.206510  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:31.228029  557899 cli_runner.go:164] Run: docker exec addons-910958 stat /var/lib/dpkg/alternatives/iptables
	I1217 07:50:31.274171  557899 oci.go:144] the created container "addons-910958" has a running status.
	I1217 07:50:31.274229  557899 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519...
	I1217 07:50:31.275771  557899 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 07:50:31.302708  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:31.322985  557899 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 07:50:31.323009  557899 kic_runner.go:114] Args: [docker exec --privileged addons-910958 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 07:50:31.372037  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:31.390404  557899 machine.go:94] provisionDockerMachine start ...
	I1217 07:50:31.390518  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:31.409129  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:31.409298  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:31.409316  557899 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 07:50:31.410099  557899 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55668->127.0.0.1:33170: read: connection reset by peer
	I1217 07:50:34.541086  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-910958
	
	I1217 07:50:34.541120  557899 ubuntu.go:182] provisioning hostname "addons-910958"
	I1217 07:50:34.541279  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:34.561723  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:34.561828  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:34.561840  557899 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-910958 && echo "addons-910958" | sudo tee /etc/hostname
	I1217 07:50:34.698669  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-910958
	
	I1217 07:50:34.698764  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:34.717936  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:34.718037  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:34.718056  557899 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-910958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-910958/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-910958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 07:50:34.849853  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 07:50:34.849887  557899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 07:50:34.849918  557899 ubuntu.go:190] setting up certificates
	I1217 07:50:34.849936  557899 provision.go:84] configureAuth start
	I1217 07:50:34.850006  557899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910958
	I1217 07:50:34.869138  557899 provision.go:143] copyHostCerts
	I1217 07:50:34.869237  557899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 07:50:34.869383  557899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 07:50:34.869487  557899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 07:50:34.869587  557899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.addons-910958 san=[127.0.0.1 192.168.49.2 addons-910958 localhost minikube]
	I1217 07:50:35.062792  557899 provision.go:177] copyRemoteCerts
	I1217 07:50:35.062870  557899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 07:50:35.062925  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.081341  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.175642  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 07:50:35.196282  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 07:50:35.214861  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 07:50:35.233104  557899 provision.go:87] duration metric: took 383.146847ms to configureAuth
	I1217 07:50:35.233137  557899 ubuntu.go:206] setting minikube options for container-runtime
	I1217 07:50:35.233331  557899 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:50:35.233451  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.251636  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:35.251771  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:35.251794  557899 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 07:50:35.532651  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 07:50:35.532678  557899 machine.go:97] duration metric: took 4.142236727s to provisionDockerMachine
	I1217 07:50:35.532691  557899 client.go:176] duration metric: took 12.276010318s to LocalClient.Create
	I1217 07:50:35.532717  557899 start.go:167] duration metric: took 12.276081977s to libmachine.API.Create "addons-910958"
	I1217 07:50:35.532727  557899 start.go:293] postStartSetup for "addons-910958" (driver="docker")
	I1217 07:50:35.532741  557899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 07:50:35.532811  557899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 07:50:35.532859  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.551585  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.647828  557899 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 07:50:35.651757  557899 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 07:50:35.651783  557899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 07:50:35.651796  557899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 07:50:35.651854  557899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 07:50:35.651879  557899 start.go:296] duration metric: took 119.145399ms for postStartSetup
	I1217 07:50:35.652169  557899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910958
	I1217 07:50:35.670791  557899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/config.json ...
	I1217 07:50:35.671099  557899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 07:50:35.671142  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.689155  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.779790  557899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 07:50:35.784575  557899 start.go:128] duration metric: took 12.530320153s to createHost
	I1217 07:50:35.784604  557899 start.go:83] releasing machines lock for "addons-910958", held for 12.530459547s
	I1217 07:50:35.784683  557899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910958
	I1217 07:50:35.803772  557899 ssh_runner.go:195] Run: cat /version.json
	I1217 07:50:35.803825  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.803867  557899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 07:50:35.803954  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.822707  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.823672  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.966991  557899 ssh_runner.go:195] Run: systemctl --version
	I1217 07:50:35.973699  557899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 07:50:36.011200  557899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 07:50:36.016073  557899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 07:50:36.016129  557899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 07:50:36.044794  557899 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 07:50:36.044822  557899 start.go:496] detecting cgroup driver to use...
	I1217 07:50:36.044863  557899 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 07:50:36.044922  557899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 07:50:36.062196  557899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 07:50:36.075282  557899 docker.go:218] disabling cri-docker service (if available) ...
	I1217 07:50:36.075352  557899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 07:50:36.092785  557899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 07:50:36.110922  557899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 07:50:36.195914  557899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 07:50:36.285326  557899 docker.go:234] disabling docker service ...
	I1217 07:50:36.285432  557899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 07:50:36.305597  557899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 07:50:36.319468  557899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 07:50:36.402715  557899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 07:50:36.483204  557899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 07:50:36.496479  557899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 07:50:36.511549  557899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 07:50:36.511650  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.522602  557899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 07:50:36.522668  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.531918  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.541846  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.551684  557899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 07:50:36.560619  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.569861  557899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.583595  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.592682  557899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 07:50:36.600651  557899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 07:50:36.608124  557899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 07:50:36.689794  557899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 07:50:36.832206  557899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 07:50:36.832292  557899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 07:50:36.836412  557899 start.go:564] Will wait 60s for crictl version
	I1217 07:50:36.836472  557899 ssh_runner.go:195] Run: which crictl
	I1217 07:50:36.840333  557899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 07:50:36.865737  557899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 07:50:36.865831  557899 ssh_runner.go:195] Run: crio --version
	I1217 07:50:36.895056  557899 ssh_runner.go:195] Run: crio --version
	I1217 07:50:36.926549  557899 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 07:50:36.928026  557899 cli_runner.go:164] Run: docker network inspect addons-910958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 07:50:36.946500  557899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 07:50:36.950796  557899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 07:50:36.961170  557899 kubeadm.go:884] updating cluster {Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 07:50:36.961374  557899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:50:36.961432  557899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 07:50:36.996134  557899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 07:50:36.996155  557899 crio.go:433] Images already preloaded, skipping extraction
	I1217 07:50:36.996201  557899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 07:50:37.023161  557899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 07:50:37.023186  557899 cache_images.go:86] Images are preloaded, skipping loading
	I1217 07:50:37.023195  557899 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 07:50:37.023289  557899 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-910958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 07:50:37.023373  557899 ssh_runner.go:195] Run: crio config
	I1217 07:50:37.069475  557899 cni.go:84] Creating CNI manager for ""
	I1217 07:50:37.069510  557899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:50:37.069546  557899 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 07:50:37.069578  557899 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-910958 NodeName:addons-910958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 07:50:37.069724  557899 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-910958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 07:50:37.069789  557899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 07:50:37.078239  557899 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 07:50:37.078319  557899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 07:50:37.085947  557899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 07:50:37.099075  557899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 07:50:37.115144  557899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 07:50:37.128419  557899 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 07:50:37.132463  557899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 07:50:37.142744  557899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 07:50:37.223031  557899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 07:50:37.247014  557899 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958 for IP: 192.168.49.2
	I1217 07:50:37.247038  557899 certs.go:195] generating shared ca certs ...
	I1217 07:50:37.247066  557899 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.247208  557899 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 07:50:37.383987  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt ...
	I1217 07:50:37.384023  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt: {Name:mk070ca0ba13d83573609cb6f57680e38590740e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.384231  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key ...
	I1217 07:50:37.384248  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key: {Name:mk5ab23f07566032aa7d7528721f48743db4e09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.384354  557899 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 07:50:37.418404  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt ...
	I1217 07:50:37.418439  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt: {Name:mkf3379bdb5e7e03abf4cc4ccd466bba9355eae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.418639  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key ...
	I1217 07:50:37.418656  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key: {Name:mk6bce123259c1725aff073ddde7aa8d2e59fbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.418769  557899 certs.go:257] generating profile certs ...
	I1217 07:50:37.418847  557899 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.key
	I1217 07:50:37.418871  557899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt with IP's: []
	I1217 07:50:37.475762  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt ...
	I1217 07:50:37.475797  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: {Name:mk68ac088f857d3e4471b7d1160c12ca2910613c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.476001  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.key ...
	I1217 07:50:37.476019  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.key: {Name:mk969df5b3fb391a002f85454e7b25bd5e33aa53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.476123  557899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f
	I1217 07:50:37.476148  557899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 07:50:37.494018  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f ...
	I1217 07:50:37.494056  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f: {Name:mkd6ef75bf96733e6906730457c156c43906402b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.494245  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f ...
	I1217 07:50:37.494275  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f: {Name:mk6cdfafb37bbb110790ff2d6990099e317af7e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.494390  557899 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt
	I1217 07:50:37.494514  557899 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key
	I1217 07:50:37.494623  557899 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key
	I1217 07:50:37.494652  557899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt with IP's: []
	I1217 07:50:37.589144  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt ...
	I1217 07:50:37.589182  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt: {Name:mk82f4f2a1f0c6f13d1d7c55ca4ac295e7f0b821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.589368  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key ...
	I1217 07:50:37.589385  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key: {Name:mk9c596c0e945185034e52c57a57a2fce10a889b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.589591  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 07:50:37.589640  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 07:50:37.589670  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 07:50:37.589696  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 07:50:37.590276  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 07:50:37.609123  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 07:50:37.627235  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 07:50:37.645600  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 07:50:37.663807  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 07:50:37.681797  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 07:50:37.699910  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 07:50:37.719007  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 07:50:37.737623  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 07:50:37.758314  557899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 07:50:37.771265  557899 ssh_runner.go:195] Run: openssl version
	I1217 07:50:37.777708  557899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.785443  557899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 07:50:37.796486  557899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.800445  557899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.800508  557899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.834672  557899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 07:50:37.842703  557899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
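
Here the freshly generated CA is copied to /usr/share/ca-certificates/minikubeCA.pem and linked into /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject hash (b5213941.0), the form OpenSSL-based clients use when scanning a hashed certificate directory. A small Go sketch that shells out to openssl for the hash and creates the link (illustrative only; it assumes openssl is on PATH and needs root to write /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	caPath := "/usr/share/ca-certificates/minikubeCA.pem" // path used in the log
    	// "openssl x509 -hash -noout" prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "openssl:", err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace a stale link, if any
    	if err := os.Symlink(caPath, link); err != nil {
    		fmt.Fprintln(os.Stderr, "symlink:", err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link, "->", caPath)
    }
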
	I1217 07:50:37.850361  557899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 07:50:37.854262  557899 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 07:50:37.854316  557899 kubeadm.go:401] StartCluster: {Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 07:50:37.854428  557899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:50:37.854486  557899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:50:37.883512  557899 cri.go:89] found id: ""
	I1217 07:50:37.883622  557899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 07:50:37.892001  557899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 07:50:37.900624  557899 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 07:50:37.900718  557899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 07:50:37.909205  557899 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 07:50:37.909224  557899 kubeadm.go:158] found existing configuration files:
	
	I1217 07:50:37.909281  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 07:50:37.917288  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 07:50:37.917355  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 07:50:37.924786  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 07:50:37.932711  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 07:50:37.932800  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 07:50:37.940847  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 07:50:37.948736  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 07:50:37.948798  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 07:50:37.956276  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 07:50:37.964592  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 07:50:37.964662  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
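
This block is the stale-config cleanup: for each kubeconfig kubeadm would otherwise reuse, minikube greps for the expected API endpoint and removes the file if the endpoint is missing (here none of the four files exist yet, so each grep exits 2 and the rm is a no-op). The same check-then-remove loop, sketched locally in Go (illustrative only; the real work happens over SSH as shown above):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443" // endpoint from the log
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing at a different endpoint: remove it so
    			// "kubeadm init" regenerates a consistent config.
    			_ = os.Remove(f)
    		}
    	}
    }
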
	I1217 07:50:37.972511  557899 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 07:50:38.038199  557899 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 07:50:38.101234  557899 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 07:50:47.536912  557899 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 07:50:47.536997  557899 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 07:50:47.537116  557899 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 07:50:47.537190  557899 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 07:50:47.537225  557899 kubeadm.go:319] OS: Linux
	I1217 07:50:47.537268  557899 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 07:50:47.537309  557899 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 07:50:47.537352  557899 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 07:50:47.537394  557899 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 07:50:47.537441  557899 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 07:50:47.537482  557899 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 07:50:47.537560  557899 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 07:50:47.537620  557899 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 07:50:47.537715  557899 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 07:50:47.537860  557899 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 07:50:47.537973  557899 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 07:50:47.538030  557899 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 07:50:47.539992  557899 out.go:252]   - Generating certificates and keys ...
	I1217 07:50:47.540064  557899 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 07:50:47.540117  557899 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 07:50:47.540187  557899 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 07:50:47.540255  557899 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 07:50:47.540309  557899 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 07:50:47.540366  557899 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 07:50:47.540411  557899 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 07:50:47.540522  557899 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-910958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 07:50:47.540590  557899 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 07:50:47.540703  557899 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-910958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 07:50:47.540766  557899 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 07:50:47.540858  557899 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 07:50:47.540941  557899 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 07:50:47.540997  557899 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 07:50:47.541043  557899 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 07:50:47.541092  557899 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 07:50:47.541138  557899 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 07:50:47.541199  557899 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 07:50:47.541247  557899 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 07:50:47.541323  557899 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 07:50:47.541379  557899 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 07:50:47.542897  557899 out.go:252]   - Booting up control plane ...
	I1217 07:50:47.543010  557899 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 07:50:47.543074  557899 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 07:50:47.543132  557899 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 07:50:47.543216  557899 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 07:50:47.543294  557899 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 07:50:47.543382  557899 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 07:50:47.543469  557899 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 07:50:47.543516  557899 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 07:50:47.543660  557899 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 07:50:47.543761  557899 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 07:50:47.543819  557899 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.053427ms
	I1217 07:50:47.543902  557899 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 07:50:47.543966  557899 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 07:50:47.544036  557899 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 07:50:47.544098  557899 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 07:50:47.544162  557899 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.609729191s
	I1217 07:50:47.544222  557899 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.431555577s
	I1217 07:50:47.544282  557899 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501367111s
	I1217 07:50:47.544365  557899 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 07:50:47.544474  557899 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 07:50:47.544544  557899 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 07:50:47.544708  557899 kubeadm.go:319] [mark-control-plane] Marking the node addons-910958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 07:50:47.544755  557899 kubeadm.go:319] [bootstrap-token] Using token: kmd3fl.fb4wvkd0q8yiee8n
	I1217 07:50:47.546513  557899 out.go:252]   - Configuring RBAC rules ...
	I1217 07:50:47.546638  557899 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 07:50:47.546713  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 07:50:47.546853  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 07:50:47.546991  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 07:50:47.547113  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 07:50:47.547199  557899 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 07:50:47.547300  557899 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 07:50:47.547365  557899 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 07:50:47.547410  557899 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 07:50:47.547418  557899 kubeadm.go:319] 
	I1217 07:50:47.547467  557899 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 07:50:47.547475  557899 kubeadm.go:319] 
	I1217 07:50:47.547549  557899 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 07:50:47.547554  557899 kubeadm.go:319] 
	I1217 07:50:47.547574  557899 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 07:50:47.547632  557899 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 07:50:47.547674  557899 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 07:50:47.547683  557899 kubeadm.go:319] 
	I1217 07:50:47.547728  557899 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 07:50:47.547732  557899 kubeadm.go:319] 
	I1217 07:50:47.547776  557899 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 07:50:47.547788  557899 kubeadm.go:319] 
	I1217 07:50:47.547828  557899 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 07:50:47.547892  557899 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 07:50:47.547947  557899 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 07:50:47.547953  557899 kubeadm.go:319] 
	I1217 07:50:47.548017  557899 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 07:50:47.548084  557899 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 07:50:47.548089  557899 kubeadm.go:319] 
	I1217 07:50:47.548174  557899 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kmd3fl.fb4wvkd0q8yiee8n \
	I1217 07:50:47.548285  557899 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 07:50:47.548306  557899 kubeadm.go:319] 	--control-plane 
	I1217 07:50:47.548310  557899 kubeadm.go:319] 
	I1217 07:50:47.548384  557899 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 07:50:47.548396  557899 kubeadm.go:319] 
	I1217 07:50:47.548479  557899 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kmd3fl.fb4wvkd0q8yiee8n \
	I1217 07:50:47.548613  557899 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 07:50:47.548632  557899 cni.go:84] Creating CNI manager for ""
	I1217 07:50:47.548640  557899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:50:47.550177  557899 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 07:50:47.551669  557899 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 07:50:47.556354  557899 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 07:50:47.556378  557899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 07:50:47.570185  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 07:50:47.790979  557899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 07:50:47.791148  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:47.791260  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-910958 minikube.k8s.io/updated_at=2025_12_17T07_50_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=addons-910958 minikube.k8s.io/primary=true
	I1217 07:50:47.805743  557899 ops.go:34] apiserver oom_adj: -16
	I1217 07:50:47.874568  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:48.374606  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:48.874938  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:49.375396  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:49.875177  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:50.374996  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:50.874691  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:51.375266  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:51.875071  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:52.374701  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:52.453413  557899 kubeadm.go:1114] duration metric: took 4.662308084s to wait for elevateKubeSystemPrivileges
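
Between 07:50:47.8 and 07:50:52.4 the same "kubectl get sa default" call is retried roughly every 500ms: minikube has just created the minikube-rbac cluster-admin binding and waits for the controller manager to create the "default" ServiceAccount before moving on. A minimal polling sketch of that wait, using the paths from the log (it would have to run on the node itself; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.34.3/kubectl" // path from the log
    	kubeconfig := "/var/lib/minikube/kubeconfig"
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
    	os.Exit(1)
    }
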
	I1217 07:50:52.453457  557899 kubeadm.go:403] duration metric: took 14.59914402s to StartCluster
	I1217 07:50:52.453480  557899 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:52.453643  557899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:50:52.454176  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:52.454401  557899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 07:50:52.454457  557899 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 07:50:52.454566  557899 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 07:50:52.454727  557899 addons.go:70] Setting gcp-auth=true in profile "addons-910958"
	I1217 07:50:52.454743  557899 addons.go:70] Setting cloud-spanner=true in profile "addons-910958"
	I1217 07:50:52.454761  557899 addons.go:239] Setting addon cloud-spanner=true in "addons-910958"
	I1217 07:50:52.454767  557899 mustload.go:66] Loading cluster: addons-910958
	I1217 07:50:52.454778  557899 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-910958"
	I1217 07:50:52.454790  557899 addons.go:70] Setting registry=true in profile "addons-910958"
	I1217 07:50:52.454807  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454817  557899 addons.go:70] Setting volumesnapshots=true in profile "addons-910958"
	I1217 07:50:52.454808  557899 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-910958"
	I1217 07:50:52.454835  557899 addons.go:239] Setting addon registry=true in "addons-910958"
	I1217 07:50:52.454845  557899 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-910958"
	I1217 07:50:52.454847  557899 addons.go:70] Setting inspektor-gadget=true in profile "addons-910958"
	I1217 07:50:52.454858  557899 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-910958"
	I1217 07:50:52.454886  557899 addons.go:239] Setting addon inspektor-gadget=true in "addons-910958"
	I1217 07:50:52.454903  557899 addons.go:70] Setting ingress=true in profile "addons-910958"
	I1217 07:50:52.454911  557899 addons.go:70] Setting ingress-dns=true in profile "addons-910958"
	I1217 07:50:52.454915  557899 addons.go:239] Setting addon ingress=true in "addons-910958"
	I1217 07:50:52.454925  557899 addons.go:239] Setting addon ingress-dns=true in "addons-910958"
	I1217 07:50:52.454930  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454941  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454945  557899 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-910958"
	I1217 07:50:52.454949  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454729  557899 addons.go:70] Setting yakd=true in profile "addons-910958"
	I1217 07:50:52.454967  557899 addons.go:239] Setting addon yakd=true in "addons-910958"
	I1217 07:50:52.454984  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454989  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454800  557899 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-910958"
	I1217 07:50:52.455089  557899 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-910958"
	I1217 07:50:52.455184  557899 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-910958"
	I1217 07:50:52.455213  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.455349  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455401  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455455  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455463  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455566  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455588  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455825  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.454886  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454942  557899 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:50:52.454809  557899 addons.go:70] Setting volcano=true in profile "addons-910958"
	I1217 07:50:52.456585  557899 addons.go:239] Setting addon volcano=true in "addons-910958"
	I1217 07:50:52.456630  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454896  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454755  557899 addons.go:70] Setting registry-creds=true in profile "addons-910958"
	I1217 07:50:52.457131  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.457147  557899 addons.go:239] Setting addon registry-creds=true in "addons-910958"
	I1217 07:50:52.457187  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.455899  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.454771  557899 addons.go:70] Setting storage-provisioner=true in profile "addons-910958"
	I1217 07:50:52.457671  557899 addons.go:239] Setting addon storage-provisioner=true in "addons-910958"
	I1217 07:50:52.454836  557899 addons.go:239] Setting addon volumesnapshots=true in "addons-910958"
	I1217 07:50:52.454732  557899 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:50:52.457748  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.457707  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454895  557899 addons.go:70] Setting metrics-server=true in profile "addons-910958"
	I1217 07:50:52.457928  557899 addons.go:239] Setting addon metrics-server=true in "addons-910958"
	I1217 07:50:52.457967  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454949  557899 addons.go:70] Setting default-storageclass=true in profile "addons-910958"
	I1217 07:50:52.458246  557899 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-910958"
	I1217 07:50:52.459263  557899 out.go:179] * Verifying Kubernetes components...
	I1217 07:50:52.464543  557899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 07:50:52.468199  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.468424  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469076  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469092  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469723  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469786  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.470080  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.470934  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.521974  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 07:50:52.523512  557899 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 07:50:52.526313  557899 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 07:50:52.526426  557899 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 07:50:52.526449  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 07:50:52.526634  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.528636  557899 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 07:50:52.528664  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 07:50:52.528747  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.529803  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 07:50:52.533573  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 07:50:52.533782  557899 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 07:50:52.535991  557899 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 07:50:52.537652  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 07:50:52.537854  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.536216  557899 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 07:50:52.538116  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 07:50:52.538179  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.540019  557899 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 07:50:52.542693  557899 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 07:50:52.542846  557899 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 07:50:52.542858  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 07:50:52.542936  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.546924  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 07:50:52.546954  557899 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 07:50:52.547043  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.552188  557899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 07:50:52.554519  557899 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 07:50:52.554561  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 07:50:52.554667  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.577235  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.581624  557899 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 07:50:52.581802  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 07:50:52.583661  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 07:50:52.583688  557899 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 07:50:52.583802  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 07:50:52.583875  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.587491  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 07:50:52.588088  557899 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-910958"
	I1217 07:50:52.588164  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.588726  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.592082  557899 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 07:50:52.593931  557899 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 07:50:52.593996  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 07:50:52.594079  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.597122  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 07:50:52.598209  557899 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 07:50:52.600092  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 07:50:52.601865  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 07:50:52.602211  557899 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 07:50:52.603319  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 07:50:52.604747  557899 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 07:50:52.604810  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 07:50:52.604902  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.605276  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 07:50:52.605506  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 07:50:52.605523  557899 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 07:50:52.605734  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.610109  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 07:50:52.611791  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 07:50:52.612075  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 07:50:52.612303  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.611957  557899 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 07:50:52.614525  557899 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 07:50:52.614590  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 07:50:52.614488  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.614664  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.638039  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.639233  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.642461  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.650471  557899 addons.go:239] Setting addon default-storageclass=true in "addons-910958"
	I1217 07:50:52.650529  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.651054  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	W1217 07:50:52.651601  557899 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 07:50:52.654179  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.656820  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.664715  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.680330  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.681226  557899 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 07:50:52.687168  557899 out.go:179]   - Using image docker.io/busybox:stable
	I1217 07:50:52.687776  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.688921  557899 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 07:50:52.688952  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 07:50:52.689025  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.689185  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.689404  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	W1217 07:50:52.694578  557899 sshutil.go:67] dial failure (will retry): ssh: handshake failed: EOF
	I1217 07:50:52.695001  557899 retry.go:31] will retry after 342.723363ms: ssh: handshake failed: EOF
	I1217 07:50:52.698743  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.704141  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.709749  557899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
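
The long pipeline above edits the CoreDNS Corefile in place: it fetches the coredns ConfigMap, inserts a hosts{} stanza mapping host.minikube.internal to 192.168.49.1 just before the forward plugin, enables the log plugin, and replaces the ConfigMap. A small Go function showing the same Corefile transformation on a string (injectHostRecord is a hypothetical helper; minikube does this with the sed pipeline shown):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block before the forward plugin so that
    // host.minikube.internal resolves inside the cluster. Illustrative sketch.
    func injectHostRecord(corefile, hostIP string) string {
    	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
    			b.WriteString(block) // place the hosts block just before forward
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1")) // host-side IP from the log
    }
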
	I1217 07:50:52.713362  557899 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 07:50:52.713389  557899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 07:50:52.713459  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.746589  557899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 07:50:52.758624  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.758959  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.826018  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 07:50:52.849671  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 07:50:52.870154  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 07:50:52.870184  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 07:50:52.886576  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 07:50:52.886618  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 07:50:52.900480  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 07:50:52.901389  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 07:50:52.902171  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 07:50:52.902301  557899 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 07:50:52.902310  557899 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 07:50:52.902380  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 07:50:52.903066  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 07:50:52.903632  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 07:50:52.903652  557899 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 07:50:52.926256  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 07:50:52.926315  557899 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 07:50:52.935425  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 07:50:52.939331  557899 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 07:50:52.939391  557899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 07:50:52.942256  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 07:50:52.956881  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 07:50:52.956915  557899 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 07:50:52.958893  557899 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 07:50:52.958933  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 07:50:52.964117  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 07:50:52.964167  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 07:50:52.978728  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 07:50:52.978765  557899 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 07:50:53.004156  557899 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 07:50:53.004197  557899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 07:50:53.011982  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 07:50:53.012010  557899 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 07:50:53.036276  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 07:50:53.038726  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 07:50:53.040138  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 07:50:53.040164  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 07:50:53.053820  557899 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 07:50:53.053926  557899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 07:50:53.066932  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 07:50:53.066960  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 07:50:53.106070  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 07:50:53.106186  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 07:50:53.110683  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 07:50:53.121222  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 07:50:53.121318  557899 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 07:50:53.132076  557899 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 07:50:53.134458  557899 node_ready.go:35] waiting up to 6m0s for node "addons-910958" to be "Ready" ...
	I1217 07:50:53.161211  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 07:50:53.161242  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 07:50:53.195676  557899 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 07:50:53.195709  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 07:50:53.254244  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 07:50:53.254275  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 07:50:53.287680  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 07:50:53.317298  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 07:50:53.324347  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 07:50:53.324379  557899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 07:50:53.403377  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 07:50:53.403407  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 07:50:53.443880  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 07:50:53.443909  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 07:50:53.492557  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 07:50:53.492591  557899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 07:50:53.558990  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 07:50:53.658814  557899 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-910958" context rescaled to 1 replicas
	I1217 07:50:54.275245  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.373036637s)
	I1217 07:50:54.275289  557899 addons.go:495] Verifying addon ingress=true in "addons-910958"
	I1217 07:50:54.275344  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.372248439s)
	I1217 07:50:54.275463  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.333173631s)
	I1217 07:50:54.275391  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.339919427s)
	I1217 07:50:54.275672  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.239357988s)
	I1217 07:50:54.275691  557899 addons.go:495] Verifying addon metrics-server=true in "addons-910958"
	I1217 07:50:54.275718  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.236971878s)
	I1217 07:50:54.275729  557899 addons.go:495] Verifying addon registry=true in "addons-910958"
	I1217 07:50:54.275829  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.165001056s)
	I1217 07:50:54.278348  557899 out.go:179] * Verifying ingress addon...
	I1217 07:50:54.278369  557899 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-910958 service yakd-dashboard -n yakd-dashboard
	
	I1217 07:50:54.278956  557899 out.go:179] * Verifying registry addon...
	I1217 07:50:54.281777  557899 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 07:50:54.282000  557899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 07:50:54.287716  557899 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 07:50:54.289116  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:54.287780  557899 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 07:50:54.289149  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 07:50:54.288968  557899 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
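The warning above is a routine optimistic-concurrency conflict: the local-path StorageClass changed between minikube's read and its update, so the API server rejected the write with "the object has been modified". The usual remedy is simply to retry the mutation. The sketch below (illustrative only, not minikube's code) retries a kubectl merge patch that sets the standard storageclass.kubernetes.io/is-default-class annotation; a merge patch carries no resourceVersion, so each attempt is conflict-free on its own. The class name is taken from this log; attempt count and delay are arbitrary.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// markDefaultStorageClass retries a kubectl merge patch that sets the
	// standard "is-default-class" annotation. Transient "object has been
	// modified" conflicts are handled by retrying. Illustrative sketch only.
	func markDefaultStorageClass(name string, attempts int) error {
		patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "patch", "storageclass", name,
				"--type=merge", "-p", patch).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
			time.Sleep(time.Duration(i+1) * time.Second) // simple backoff
		}
		return lastErr
	}

	func main() {
		if err := markDefaultStorageClass("local-path", 3); err != nil {
			fmt.Println("failed to mark default storage class:", err)
		}
	}
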
	I1217 07:50:54.748801  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.461065045s)
	W1217 07:50:54.748867  557899 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 07:50:54.748903  557899 retry.go:31] will retry after 316.23395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 07:50:54.748950  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.431621525s)
	I1217 07:50:54.749182  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.190081215s)
	I1217 07:50:54.749200  557899 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-910958"
	I1217 07:50:54.754145  557899 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 07:50:54.757924  557899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 07:50:54.760929  557899 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 07:50:54.760949  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:54.862322  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:54.862400  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:55.065330  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1217 07:50:55.137890  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:50:55.262135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:55.285040  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:55.285720  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:55.761938  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:55.825524  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:55.825716  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:56.262213  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:56.285132  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:56.285371  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:56.761978  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:56.785259  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:56.862624  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 07:50:57.138337  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:50:57.262166  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:57.284889  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:57.284938  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:57.565725  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.500342168s)
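The failed apply at 07:50:54 and the successful --force re-apply above show a common CRD ordering problem: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same batch that creates the volumesnapshotclasses CRD, and until the API server starts serving the new CRD that object cannot be mapped, hence "no matches for kind ... ensure CRDs are installed first" and the retry. A minimal way to make such a batch robust (a sketch, not minikube's retry.go) is to re-run the apply with a short delay while that specific error persists; the file paths below are the ones from this log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// applyWithCRDRetry re-runs `kubectl apply -f ...` until it stops failing
	// with the "no matches for kind" error that appears while a CRD created in
	// the same batch is still being registered. Illustrative sketch only.
	func applyWithCRDRetry(files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("%v: %s", err, out)
			if !strings.Contains(string(out), "no matches for kind") {
				return lastErr // some other failure; do not keep retrying
			}
			time.Sleep(2 * time.Second) // give the API server time to serve the new CRD
		}
		return lastErr
	}

	func main() {
		files := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		}
		if err := applyWithCRDRetry(files, 5); err != nil {
			fmt.Println("apply failed:", err)
		}
	}

An alternative is to apply the CRD manifests first and run `kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io` before applying the objects that depend on them.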
	I1217 07:50:57.762150  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:57.862787  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:57.862958  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:58.263625  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:58.285196  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:58.285403  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:58.762078  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:58.785385  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:58.863135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:59.262184  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:59.284836  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:59.285077  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 07:50:59.638229  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:50:59.761748  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:59.862040  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:59.862124  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:00.185852  557899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 07:51:00.185924  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:51:00.203928  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
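The two preceding lines show how the process reaches the node: the host port that Docker published for the container's 22/tcp endpoint is read with a `docker container inspect` Go template, and an SSH client is then opened against 127.0.0.1 on that port (33170 here). A standalone version of the same port lookup, as a sketch with a hypothetical helper name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshPort returns the host port that Docker published for the container's
	// 22/tcp endpoint, using the same inspect template as the log above.
	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshPort("addons-910958")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh -p", port, "docker@127.0.0.1")
	}
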
	I1217 07:51:00.260791  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:00.285781  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:00.285969  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:00.319009  557899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 07:51:00.332922  557899 addons.go:239] Setting addon gcp-auth=true in "addons-910958"
	I1217 07:51:00.332994  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:51:00.333369  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:51:00.352669  557899 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 07:51:00.352738  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:51:00.370639  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:51:00.463519  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 07:51:00.464687  557899 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 07:51:00.465909  557899 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 07:51:00.465932  557899 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 07:51:00.480154  557899 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 07:51:00.480190  557899 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 07:51:00.494617  557899 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 07:51:00.494653  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 07:51:00.508807  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 07:51:00.762614  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:00.803815  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:00.803907  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:00.831570  557899 addons.go:495] Verifying addon gcp-auth=true in "addons-910958"
	I1217 07:51:00.833239  557899 out.go:179] * Verifying gcp-auth addon...
	I1217 07:51:00.835207  557899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 07:51:00.863804  557899 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 07:51:00.863827  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:01.262062  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:01.285059  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:01.285315  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:01.339165  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:01.761240  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:01.785361  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:01.785638  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:01.838055  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 07:51:02.138179  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:51:02.261995  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:02.284956  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:02.285099  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:02.338929  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:02.761237  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:02.785162  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:02.785306  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:02.838986  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:03.260897  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:03.284723  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:03.284778  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:03.338411  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:03.761251  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:03.785096  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:03.785152  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:03.838936  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:04.261175  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:04.284897  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:04.285128  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:04.339281  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 07:51:04.637607  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:51:04.761893  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:04.785972  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:04.786124  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:04.838720  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:05.262185  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:05.285180  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:05.285275  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:05.338782  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:05.761913  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:05.784830  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:05.785048  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:05.838959  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:06.137525  557899 node_ready.go:49] node "addons-910958" is "Ready"
	I1217 07:51:06.137570  557899 node_ready.go:38] duration metric: took 13.003085494s for node "addons-910958" to be "Ready" ...
	I1217 07:51:06.137589  557899 api_server.go:52] waiting for apiserver process to appear ...
	I1217 07:51:06.137647  557899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 07:51:06.151515  557899 api_server.go:72] duration metric: took 13.697011933s to wait for apiserver process to appear ...
	I1217 07:51:06.151571  557899 api_server.go:88] waiting for apiserver healthz status ...
	I1217 07:51:06.151601  557899 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 07:51:06.160399  557899 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 07:51:06.161428  557899 api_server.go:141] control plane version: v1.34.3
	I1217 07:51:06.161472  557899 api_server.go:131] duration metric: took 9.891339ms to wait for apiserver health ...
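The apiserver readiness check above is two independent probes: `pgrep -xnf kube-apiserver.*minikube.*` confirms the process is running on the node, and an HTTPS GET against https://192.168.49.2:8443/healthz confirms it answers with 200/ok before the control-plane version is read. A minimal stand-alone version of the HTTP probe is sketched below; it skips TLS verification purely to stay short, whereas a real client should trust the cluster CA from the kubeconfig.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
	// or the attempts run out. TLS verification is skipped only to keep the sketch
	// short; production code should verify the cluster CA instead.
	func probeHealthz(url string, attempts int) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
				lastErr = fmt.Errorf("status %d: %s", resp.StatusCode, body)
			} else {
				lastErr = err
			}
			time.Sleep(time.Second)
		}
		return lastErr
	}

	func main() {
		if err := probeHealthz("https://192.168.49.2:8443/healthz", 10); err != nil {
			fmt.Println("apiserver not healthy:", err)
		}
	}
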
	I1217 07:51:06.161484  557899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 07:51:06.165286  557899 system_pods.go:59] 20 kube-system pods found
	I1217 07:51:06.165320  557899 system_pods.go:61] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.165327  557899 system_pods.go:61] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.165334  557899 system_pods.go:61] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.165340  557899 system_pods.go:61] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.165347  557899 system_pods.go:61] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.165355  557899 system_pods.go:61] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.165360  557899 system_pods.go:61] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.165367  557899 system_pods.go:61] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.165371  557899 system_pods.go:61] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.165378  557899 system_pods.go:61] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.165382  557899 system_pods.go:61] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.165391  557899 system_pods.go:61] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.165398  557899 system_pods.go:61] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.165404  557899 system_pods.go:61] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.165411  557899 system_pods.go:61] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.165416  557899 system_pods.go:61] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.165424  557899 system_pods.go:61] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.165429  557899 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.165437  557899 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.165442  557899 system_pods.go:61] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.165450  557899 system_pods.go:74] duration metric: took 3.959573ms to wait for pod list to return data ...
	I1217 07:51:06.165460  557899 default_sa.go:34] waiting for default service account to be created ...
	I1217 07:51:06.167752  557899 default_sa.go:45] found service account: "default"
	I1217 07:51:06.167770  557899 default_sa.go:55] duration metric: took 2.305215ms for default service account to be created ...
	I1217 07:51:06.167778  557899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 07:51:06.170735  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:06.170763  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.170772  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.170779  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.170788  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.170794  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.170800  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.170805  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.170811  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.170815  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.170823  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.170826  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.170830  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.170834  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.170842  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.170849  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.170854  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.170863  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.170870  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.170876  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.170884  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.170920  557899 retry.go:31] will retry after 279.559466ms: missing components: kube-dns
	I1217 07:51:06.266476  557899 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 07:51:06.266502  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:06.369733  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:06.369746  557899 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 07:51:06.369780  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:06.369748  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:06.471996  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:06.472046  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.472057  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.472067  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.472078  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.472088  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.472094  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.472100  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.472106  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.472113  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.472121  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.472126  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.472131  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.472139  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.472148  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.472156  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.472163  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.472171  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.472189  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.472200  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.472207  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.472229  557899 retry.go:31] will retry after 339.447836ms: missing components: kube-dns
	I1217 07:51:06.762160  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:06.784981  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:06.785199  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:06.815938  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:06.815976  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.815984  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.816006  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.816016  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.816026  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.816033  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.816041  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.816051  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.816054  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.816061  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.816065  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.816071  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.816079  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.816084  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.816094  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.816100  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.816109  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.816120  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.816131  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.816153  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.816176  557899 retry.go:31] will retry after 424.196304ms: missing components: kube-dns
	I1217 07:51:06.838101  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:07.245135  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:07.245178  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:07.245186  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Running
	I1217 07:51:07.245197  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:07.245214  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:07.245223  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:07.245229  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:07.245236  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:07.245243  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:07.245250  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:07.245265  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:07.245271  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:07.245278  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:07.245289  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:07.245299  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:07.245322  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:07.245331  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:07.245349  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:07.245364  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:07.245377  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:07.245384  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Running
	I1217 07:51:07.245399  557899 system_pods.go:126] duration metric: took 1.077612298s to wait for k8s-apps to be running ...
	I1217 07:51:07.245413  557899 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 07:51:07.245469  557899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 07:51:07.259965  557899 system_svc.go:56] duration metric: took 14.539685ms WaitForService to wait for kubelet
	I1217 07:51:07.259997  557899 kubeadm.go:587] duration metric: took 14.805501605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 07:51:07.260020  557899 node_conditions.go:102] verifying NodePressure condition ...
	I1217 07:51:07.261319  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:07.262615  557899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 07:51:07.262641  557899 node_conditions.go:123] node cpu capacity is 8
	I1217 07:51:07.262674  557899 node_conditions.go:105] duration metric: took 2.647793ms to run NodePressure ...
	I1217 07:51:07.262697  557899 start.go:242] waiting for startup goroutines ...
	I1217 07:51:07.285508  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:07.285502  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:07.344342  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:07.761770  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:07.785908  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:07.785942  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:07.839983  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:08.261768  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:08.285348  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:08.285377  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:08.337986  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:08.761500  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:08.785207  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:08.785332  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:08.837978  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:09.262075  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:09.284808  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:09.284935  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:09.338720  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:09.762380  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:09.789428  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:09.789497  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:09.838866  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:10.263527  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:10.286332  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:10.286352  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:10.339181  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:10.763166  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:10.785250  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:10.785337  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:10.839683  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:11.265128  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:11.285242  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:11.285263  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:11.339341  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:11.761945  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:11.786065  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:11.786274  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:11.838611  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:12.262148  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:12.285202  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:12.285473  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:12.338143  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:12.762431  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:12.785431  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:12.785497  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:12.838496  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:13.262574  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:13.285717  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:13.285808  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:13.339169  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:13.761858  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:13.785987  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:13.785992  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:13.838954  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:14.261517  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:14.286217  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:14.286392  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:14.339104  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:14.762921  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:14.786339  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:14.786837  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:14.838693  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:15.262458  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:15.285867  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:15.286032  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:15.338960  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:15.763071  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:15.785874  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:15.786910  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:15.838684  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:16.261491  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:16.285380  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:16.285396  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:16.338312  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:16.762401  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:16.785636  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:16.785681  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:16.838420  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:17.262205  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:17.284862  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:17.284901  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:17.338820  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:17.762088  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:17.786218  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:17.786267  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:17.839076  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:18.262239  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:18.285416  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:18.285504  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:18.339206  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:18.762135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:18.785167  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:18.785974  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:18.838813  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:19.261942  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:19.286240  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:19.286487  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:19.339025  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:19.761187  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:19.785126  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:19.785290  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:19.838986  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:20.261507  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:20.285289  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:20.285452  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:20.339111  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:20.761896  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:20.785863  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:20.785890  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:20.838211  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:21.262433  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:21.285599  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:21.285702  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:21.338649  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:21.762157  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:21.862729  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:21.862851  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:21.863039  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:22.263099  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:22.285029  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:22.285030  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:22.339262  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:22.762274  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:22.785969  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:22.786090  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:22.838822  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:23.261692  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:23.285422  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:23.285549  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:23.338528  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:23.762317  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:23.785367  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:23.785418  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:23.838771  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:24.261273  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:24.284945  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:24.285153  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:24.338701  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:24.761989  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:24.786228  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:24.786421  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:24.839307  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:25.262097  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:25.284764  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:25.284784  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:25.338354  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:25.762991  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:25.785903  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:25.785976  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:25.839041  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:26.262164  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:26.285001  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:26.285072  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:26.339206  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:26.762131  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:26.862264  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:26.862366  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:26.862418  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:27.262369  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:27.363103  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:27.363131  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:27.363296  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:27.761795  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:27.786346  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:27.786368  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:27.839318  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:28.262048  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:28.285997  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:28.286349  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:28.338178  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:28.761864  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:28.785673  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:28.785735  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:28.838315  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:29.342662  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:29.342894  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:29.343075  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:29.343217  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:29.762021  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:29.784919  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:29.785065  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:29.838759  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:30.261859  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:30.285141  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:30.285353  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:30.339107  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:30.762735  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:30.786292  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:30.787359  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:30.840613  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:31.262155  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:31.286078  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:31.286192  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:31.339254  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:31.762347  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:31.784998  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:31.785132  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:31.839185  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:32.261953  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:32.285433  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:32.285510  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:32.338234  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:32.762947  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:32.785775  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:32.785821  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:32.838728  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:33.261417  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:33.285208  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:33.285395  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:33.339270  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:33.761944  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:33.784939  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:33.785037  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:33.838672  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:34.262667  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:34.285923  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:34.286034  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:34.338863  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:34.761792  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:34.862433  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:34.862587  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:34.862617  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:35.262265  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:35.285091  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:35.285141  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:35.338832  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:35.761494  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:35.785355  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:35.785421  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:35.838155  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:36.261991  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:36.284962  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:36.285169  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:36.338853  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:36.762135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:36.785722  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:36.785857  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:36.838918  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:37.262039  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:37.285200  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:37.285383  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:37.338915  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:37.761473  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:37.785271  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:37.785311  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:37.861976  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:38.261586  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:38.285492  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:38.285502  557899 kapi.go:107] duration metric: took 44.003500213s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 07:51:38.338345  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:38.762869  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:38.785955  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:38.838952  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:39.262852  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:39.286068  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:39.339315  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:39.762196  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:39.785598  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:39.838770  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:40.262298  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:40.285818  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:40.339025  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:40.764327  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:40.787038  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:40.840028  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:41.263268  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:41.285819  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:41.342841  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:41.762800  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:41.862664  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:41.862734  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:42.262253  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:42.285445  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:42.339172  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:42.762726  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:42.785916  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:42.839042  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:43.262223  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:43.284991  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:43.338980  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:43.762556  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:43.785396  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:43.838193  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:44.261694  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:44.285516  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:44.338191  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:44.762280  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:44.785273  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:44.839454  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:45.262345  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:45.285318  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:45.363289  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:45.762013  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:45.784852  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:45.838969  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:46.261800  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:46.285514  557899 kapi.go:107] duration metric: took 52.003736636s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 07:51:46.338082  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:46.762264  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:46.838822  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:47.262285  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:47.338359  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:47.762090  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:47.839074  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:48.285126  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:48.384013  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:48.762967  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:48.862978  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:49.261980  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:49.338892  557899 kapi.go:107] duration metric: took 48.503677997s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 07:51:49.341402  557899 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-910958 cluster.
	I1217 07:51:49.343029  557899 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 07:51:49.344663  557899 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
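The three gcp-auth messages above describe the opt-out mechanism: once the addon is up, the webhook mounts GCP credentials into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of a pod that opts out, assuming the conventional label value "true" (pod name and command are placeholders; the busybox image is the one already used elsewhere in this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                      # placeholder name
  labels:
    gcp-auth-skip-secret: "true"          # tells the gcp-auth webhook not to mount credentials
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox    # image already present in this test run
    command: ["sleep", "3600"]
EOF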
	I1217 07:51:49.762077  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:50.261904  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:50.762419  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:51.262416  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:51.761694  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:52.262323  557899 kapi.go:107] duration metric: took 57.504400189s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 07:51:52.264767  557899 out.go:179] * Enabled addons: ingress-dns, cloud-spanner, registry-creds, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, nvidia-device-plugin, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1217 07:51:52.265923  557899 addons.go:530] duration metric: took 59.811358989s for enable addons: enabled=[ingress-dns cloud-spanner registry-creds storage-provisioner amd-gpu-device-plugin inspektor-gadget metrics-server yakd default-storageclass nvidia-device-plugin volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1217 07:51:52.265970  557899 start.go:247] waiting for cluster config update ...
	I1217 07:51:52.265990  557899 start.go:256] writing updated cluster config ...
	I1217 07:51:52.266261  557899 ssh_runner.go:195] Run: rm -f paused
	I1217 07:51:52.270482  557899 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 07:51:52.273681  557899 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h9rb2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.277789  557899 pod_ready.go:94] pod "coredns-66bc5c9577-h9rb2" is "Ready"
	I1217 07:51:52.277812  557899 pod_ready.go:86] duration metric: took 4.107297ms for pod "coredns-66bc5c9577-h9rb2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.279867  557899 pod_ready.go:83] waiting for pod "etcd-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.283612  557899 pod_ready.go:94] pod "etcd-addons-910958" is "Ready"
	I1217 07:51:52.283635  557899 pod_ready.go:86] duration metric: took 3.743448ms for pod "etcd-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.285476  557899 pod_ready.go:83] waiting for pod "kube-apiserver-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.289126  557899 pod_ready.go:94] pod "kube-apiserver-addons-910958" is "Ready"
	I1217 07:51:52.289152  557899 pod_ready.go:86] duration metric: took 3.653735ms for pod "kube-apiserver-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.290879  557899 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.674237  557899 pod_ready.go:94] pod "kube-controller-manager-addons-910958" is "Ready"
	I1217 07:51:52.674268  557899 pod_ready.go:86] duration metric: took 383.368833ms for pod "kube-controller-manager-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.874759  557899 pod_ready.go:83] waiting for pod "kube-proxy-rpkss" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.275056  557899 pod_ready.go:94] pod "kube-proxy-rpkss" is "Ready"
	I1217 07:51:53.275086  557899 pod_ready.go:86] duration metric: took 400.298982ms for pod "kube-proxy-rpkss" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.474740  557899 pod_ready.go:83] waiting for pod "kube-scheduler-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.875030  557899 pod_ready.go:94] pod "kube-scheduler-addons-910958" is "Ready"
	I1217 07:51:53.875060  557899 pod_ready.go:86] duration metric: took 400.287387ms for pod "kube-scheduler-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.875077  557899 pod_ready.go:40] duration metric: took 1.604557482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
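The pod_ready loop above is minikube's internal readiness check against the listed label selectors; a rough hand-run equivalent using kubectl with the same labels would look like the sketch below (the 240s timeout mirrors the 4m0s budget in the log and is otherwise arbitrary):

# wait on the same kube-system pods the log polls, by label selector
kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
kubectl -n kube-system wait pod -l component=etcd --for=condition=Ready --timeout=240s
kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=240s
kubectl -n kube-system wait pod -l component=kube-controller-manager --for=condition=Ready --timeout=240s
kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=240s
kubectl -n kube-system wait pod -l component=kube-scheduler --for=condition=Ready --timeout=240s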
	I1217 07:51:53.921305  557899 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 07:51:53.924461  557899 out.go:179] * Done! kubectl is now configured to use "addons-910958" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 07:53:23 addons-910958 crio[772]: time="2025-12-17T07:53:23.797975847Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=257533c0-47e7-410c-8b50-7fb37b40752c name=/runtime.v1.ImageService/PullImage
	Dec 17 07:53:23 addons-910958 crio[772]: time="2025-12-17T07:53:23.802397638Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.70292886Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=257533c0-47e7-410c-8b50-7fb37b40752c name=/runtime.v1.ImageService/PullImage
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.703771276Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=84d23aaa-0a8e-4029-b02f-348b740a9ce0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.738321607Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=64994bb0-5288-4811-ade5-461b8661b2c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.743587331Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-brbhv/registry-creds" id=7aec1443-aff9-4ea7-a730-9ccd710fff44 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.743743268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.75138136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.752094551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.796964228Z" level=info msg="Created container 8fe4b4b9b6e2aaf607b9b5cfeed98eb04f4018774869c541f587002bc49b55c1: kube-system/registry-creds-764b6fb674-brbhv/registry-creds" id=7aec1443-aff9-4ea7-a730-9ccd710fff44 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.79782587Z" level=info msg="Starting container: 8fe4b4b9b6e2aaf607b9b5cfeed98eb04f4018774869c541f587002bc49b55c1" id=9b3588e9-fa87-41c3-b783-30eb775466ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 07:53:25 addons-910958 crio[772]: time="2025-12-17T07:53:25.800076228Z" level=info msg="Started container" PID=8853 containerID=8fe4b4b9b6e2aaf607b9b5cfeed98eb04f4018774869c541f587002bc49b55c1 description=kube-system/registry-creds-764b6fb674-brbhv/registry-creds id=9b3588e9-fa87-41c3-b783-30eb775466ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4eba24ae7efcfa20fa20f4681104edb7e957a16f7700134a2fbd2e52f24bba1
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.589748805Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-b82sq/POD" id=109971e2-e88b-47aa-a628-c89515df8c30 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.589843667Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.596572945Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-b82sq Namespace:default ID:f04752b6ba610d32414f01523f6b06b8b75b1d961c9c60f17de7a9c99d50f5d2 UID:aa9df58c-9ba6-4e49-baad-5f3ed3e63588 NetNS:/var/run/netns/83475ed4-1ad9-4d5b-8352-bbb724b764d9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051e470}] Aliases:map[]}"
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.596609848Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-b82sq to CNI network \"kindnet\" (type=ptp)"
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.60763952Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-b82sq Namespace:default ID:f04752b6ba610d32414f01523f6b06b8b75b1d961c9c60f17de7a9c99d50f5d2 UID:aa9df58c-9ba6-4e49-baad-5f3ed3e63588 NetNS:/var/run/netns/83475ed4-1ad9-4d5b-8352-bbb724b764d9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051e470}] Aliases:map[]}"
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.607788409Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-b82sq for CNI network kindnet (type=ptp)"
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.60869563Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.60949126Z" level=info msg="Ran pod sandbox f04752b6ba610d32414f01523f6b06b8b75b1d961c9c60f17de7a9c99d50f5d2 with infra container: default/hello-world-app-5d498dc89-b82sq/POD" id=109971e2-e88b-47aa-a628-c89515df8c30 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.610908126Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=276b8209-7e9c-4e64-ac97-041e856ab772 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.611037472Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=276b8209-7e9c-4e64-ac97-041e856ab772 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.611068602Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=276b8209-7e9c-4e64-ac97-041e856ab772 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.611859947Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=19f73238-6184-436b-ae9f-9d3275e34a93 name=/runtime.v1.ImageService/PullImage
	Dec 17 07:54:33 addons-910958 crio[772]: time="2025-12-17T07:54:33.622073381Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	8fe4b4b9b6e2a       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   f4eba24ae7efc       registry-creds-764b6fb674-brbhv             kube-system
	906869232d8ef       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago        Running             nginx                                    0                   6ca3a722de416       nginx                                       default
	f9e1daa3443dd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   7ad4acd0119d3       busybox                                     default
	c35c7accdf21b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	2c42903f322a2       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	d3dd2ef7e0cca       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	86a876957c74c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	d56cf4295e0dc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   837e6ac3d76f9       gcp-auth-78565c9fb4-r29vk                   gcp-auth
	eb06595f89410       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago        Running             controller                               0                   b30b2753d50c4       ingress-nginx-controller-85d4c799dd-tnjvc   ingress-nginx
	321626501fabd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	5cdb117a00344       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago        Running             gadget                                   0                   23e2b63486a5f       gadget-g8wb6                                gadget
	7cf58434aeec3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago        Running             registry-proxy                           0                   4e200115ba4e6       registry-proxy-x5kj2                        kube-system
	d6ff70f629b0e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago        Running             volume-snapshot-controller               0                   3cdf6ae83031d       snapshot-controller-7d9fbc56b8-vkxh6        kube-system
	ff2d1c6802978       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago        Running             csi-external-health-monitor-controller   0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	4f418a0d25246       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   5d07f39830c88       amd-gpu-device-plugin-sq4qp                 kube-system
	79702ad3f3aed       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   491225c035b0c       nvidia-device-plugin-daemonset-vwl8f        kube-system
	ad4f1d71a3d82       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   b162e20b7a9c8       snapshot-controller-7d9fbc56b8-pw726        kube-system
	7700b77e1b687       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             3 minutes ago        Exited              patch                                    1                   13705ce510ad2       ingress-nginx-admission-patch-5kqv5         ingress-nginx
	4980d741df68d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago        Exited              create                                   0                   3718fbeb80de4       ingress-nginx-admission-create-2r822        ingress-nginx
	b48f70dc9cbcf       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   aa87d937a26bb       csi-hostpath-resizer-0                      kube-system
	c2a123dbc0656       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   8c1c18288197d       local-path-provisioner-648f6765c9-78mbr     local-path-storage
	98064f5bc77f6       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              3 minutes ago        Running             yakd                                     0                   75851cf78637d       yakd-dashboard-6654c87f9b-th7hc             yakd-dashboard
	77f92e3cf18c9       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   d4b599a468eb7       cloud-spanner-emulator-5bdddb765-p6tr4      default
	1520830ff9484       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   05564b9fd9843       csi-hostpath-attacher-0                     kube-system
	b05e54700098d       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   2ac911b7b69d2       metrics-server-85b7d694d7-9j26h             kube-system
	152830796e570       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   847726c96b580       kube-ingress-dns-minikube                   kube-system
	4f95ea3dd74c9       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   a2bf45f97dec3       registry-6b586f9694-hn4rs                   kube-system
	ad02540b0f2f5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   63a2754046592       coredns-66bc5c9577-h9rb2                    kube-system
	b6e3631773200       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   7b676b1fad4e1       storage-provisioner                         kube-system
	f0f35e9b0c091       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           3 minutes ago        Running             kindnet-cni                              0                   af575f948a2d2       kindnet-l7fvh                               kube-system
	08a3cfad5dcdf       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             3 minutes ago        Running             kube-proxy                               0                   4ae3efecb62df       kube-proxy-rpkss                            kube-system
	f001874e31dcf       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago        Running             etcd                                     0                   6812fc3e6aa5b       etcd-addons-910958                          kube-system
	34a5e7ca13b08       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             3 minutes ago        Running             kube-apiserver                           0                   a5b746f1cedd3       kube-apiserver-addons-910958                kube-system
	d4daacfac93d4       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             3 minutes ago        Running             kube-controller-manager                  0                   f6c7311e4b628       kube-controller-manager-addons-910958       kube-system
	e3b8c740226a7       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             3 minutes ago        Running             kube-scheduler                           0                   25a480f7b7ef9       kube-scheduler-addons-910958                kube-system
	
	
	==> coredns [ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77] <==
	[INFO] 10.244.0.22:53481 - 27853 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205748s
	[INFO] 10.244.0.22:35745 - 38319 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.007037629s
	[INFO] 10.244.0.22:55858 - 18939 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00821733s
	[INFO] 10.244.0.22:51405 - 28679 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005487597s
	[INFO] 10.244.0.22:41953 - 57042 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005936126s
	[INFO] 10.244.0.22:36185 - 46787 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003776326s
	[INFO] 10.244.0.22:46666 - 59586 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004715707s
	[INFO] 10.244.0.22:47386 - 11921 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000985998s
	[INFO] 10.244.0.22:33271 - 37073 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00218119s
	[INFO] 10.244.0.25:52299 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221441s
	[INFO] 10.244.0.25:34649 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000195752s
	[INFO] 10.244.0.31:56338 - 47212 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000233957s
	[INFO] 10.244.0.31:36082 - 17431 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000315272s
	[INFO] 10.244.0.31:50750 - 7450 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000156525s
	[INFO] 10.244.0.31:41806 - 31098 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000205328s
	[INFO] 10.244.0.31:45836 - 56545 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000095778s
	[INFO] 10.244.0.31:37352 - 53081 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000150644s
	[INFO] 10.244.0.31:51738 - 62232 "AAAA IN accounts.google.com.europe-west1-b.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.00490223s
	[INFO] 10.244.0.31:32805 - 23571 "A IN accounts.google.com.europe-west1-b.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005027044s
	[INFO] 10.244.0.31:35887 - 45158 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004182322s
	[INFO] 10.244.0.31:35637 - 12306 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004247118s
	[INFO] 10.244.0.31:46313 - 10820 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004356249s
	[INFO] 10.244.0.31:49564 - 21634 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004403385s
	[INFO] 10.244.0.31:38683 - 9861 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001765645s
	[INFO] 10.244.0.31:47303 - 39840 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001860728s
	
	
	==> describe nodes <==
	Name:               addons-910958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-910958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=addons-910958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T07_50_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-910958
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-910958"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 07:50:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-910958
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 07:54:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 07:54:21 +0000   Wed, 17 Dec 2025 07:50:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 07:54:21 +0000   Wed, 17 Dec 2025 07:50:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 07:54:21 +0000   Wed, 17 Dec 2025 07:50:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 07:54:21 +0000   Wed, 17 Dec 2025 07:51:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-910958
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                f52214bb-3910-469d-8d45-568e2170d4b7
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  default                     cloud-spanner-emulator-5bdddb765-p6tr4       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  default                     hello-world-app-5d498dc89-b82sq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-g8wb6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  gcp-auth                    gcp-auth-78565c9fb4-r29vk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-tnjvc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m40s
	  kube-system                 amd-gpu-device-plugin-sq4qp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-66bc5c9577-h9rb2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m42s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 csi-hostpathplugin-lmbsr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-addons-910958                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m48s
	  kube-system                 kindnet-l7fvh                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m42s
	  kube-system                 kube-apiserver-addons-910958                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-addons-910958        200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-proxy-rpkss                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-scheduler-addons-910958                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 metrics-server-85b7d694d7-9j26h              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m41s
	  kube-system                 nvidia-device-plugin-daemonset-vwl8f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 registry-6b586f9694-hn4rs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 registry-creds-764b6fb674-brbhv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 registry-proxy-x5kj2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 snapshot-controller-7d9fbc56b8-pw726         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 snapshot-controller-7d9fbc56b8-vkxh6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  local-path-storage          local-path-provisioner-648f6765c9-78mbr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-th7hc              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node addons-910958 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node addons-910958 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x8 over 3m52s)  kubelet          Node addons-910958 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s                  kubelet          Node addons-910958 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s                  kubelet          Node addons-910958 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s                  kubelet          Node addons-910958 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m43s                  node-controller  Node addons-910958 event: Registered Node addons-910958 in Controller
	  Normal  NodeReady                3m29s                  kubelet          Node addons-910958 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 17 bb 9f 9a 4b 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 91 37 97 9f 01 08 06
	[Dec17 07:52] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.033977] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.024926] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.022908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.023867] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +2.047880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +4.032673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +8.190487] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[ +16.382857] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	
	
	==> etcd [f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54] <==
	{"level":"warn","ts":"2025-12-17T07:50:43.899280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.907255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.915620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.922938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.932387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.940275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.947175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.955176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.964183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.970822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.998398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:44.007587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:44.016059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:44.067569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:55.311283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:55.318287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.483122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.492834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.508055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.517121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46166","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T07:51:29.163220Z","caller":"traceutil/trace.go:172","msg":"trace[1011630107] transaction","detail":"{read_only:false; response_revision:1082; number_of_response:1; }","duration":"188.913732ms","start":"2025-12-17T07:51:28.974284Z","end":"2025-12-17T07:51:29.163197Z","steps":["trace[1011630107] 'process raft request'  (duration: 161.062278ms)","trace[1011630107] 'compare'  (duration: 27.732802ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T07:51:29.340183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.506542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5kqv5\" limit:1 ","response":"range_response_count:1 size:5034"}
	{"level":"info","ts":"2025-12-17T07:51:29.340297Z","caller":"traceutil/trace.go:172","msg":"trace[740532932] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5kqv5; range_end:; response_count:1; response_revision:1084; }","duration":"131.613749ms","start":"2025-12-17T07:51:29.208649Z","end":"2025-12-17T07:51:29.340263Z","steps":["trace[740532932] 'agreement among raft nodes before linearized reading'  (duration: 59.72777ms)","trace[740532932] 'range keys from in-memory index tree'  (duration: 71.69161ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T07:51:29.340197Z","caller":"traceutil/trace.go:172","msg":"trace[337888061] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"161.769544ms","start":"2025-12-17T07:51:29.178405Z","end":"2025-12-17T07:51:29.340174Z","steps":["trace[337888061] 'process raft request'  (duration: 90.115816ms)","trace[337888061] 'compare'  (duration: 71.536541ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T07:51:34.614862Z","caller":"traceutil/trace.go:172","msg":"trace[2072376352] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"102.342999ms","start":"2025-12-17T07:51:34.512500Z","end":"2025-12-17T07:51:34.614843Z","steps":["trace[2072376352] 'process raft request'  (duration: 102.218732ms)"],"step_count":1}
	
	
	==> gcp-auth [d56cf4295e0dce78f1f395237edae415a76ad80e86448ce815b1e839fd52858d] <==
	2025/12/17 07:51:48 GCP Auth Webhook started!
	2025/12/17 07:51:54 Ready to marshal response ...
	2025/12/17 07:51:54 Ready to write response ...
	2025/12/17 07:51:54 Ready to marshal response ...
	2025/12/17 07:51:54 Ready to write response ...
	2025/12/17 07:51:54 Ready to marshal response ...
	2025/12/17 07:51:54 Ready to write response ...
	2025/12/17 07:52:09 Ready to marshal response ...
	2025/12/17 07:52:09 Ready to write response ...
	2025/12/17 07:52:13 Ready to marshal response ...
	2025/12/17 07:52:13 Ready to write response ...
	2025/12/17 07:52:14 Ready to marshal response ...
	2025/12/17 07:52:14 Ready to write response ...
	2025/12/17 07:52:14 Ready to marshal response ...
	2025/12/17 07:52:14 Ready to write response ...
	2025/12/17 07:52:19 Ready to marshal response ...
	2025/12/17 07:52:19 Ready to write response ...
	2025/12/17 07:52:24 Ready to marshal response ...
	2025/12/17 07:52:24 Ready to write response ...
	2025/12/17 07:52:33 Ready to marshal response ...
	2025/12/17 07:52:33 Ready to write response ...
	2025/12/17 07:54:33 Ready to marshal response ...
	2025/12/17 07:54:33 Ready to write response ...
	
	
	==> kernel <==
	 07:54:34 up  1:36,  0 user,  load average: 0.30, 1.65, 2.20
	Linux addons-910958 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c] <==
	I1217 07:52:25.597656       1 main.go:301] handling current node
	I1217 07:52:35.598054       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:52:35.598124       1 main.go:301] handling current node
	I1217 07:52:45.600876       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:52:45.600920       1 main.go:301] handling current node
	I1217 07:52:55.597597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:52:55.597635       1 main.go:301] handling current node
	I1217 07:53:05.597751       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:53:05.597800       1 main.go:301] handling current node
	I1217 07:53:15.602520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:53:15.602578       1 main.go:301] handling current node
	I1217 07:53:25.600616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:53:25.600652       1 main.go:301] handling current node
	I1217 07:53:35.598366       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:53:35.598420       1 main.go:301] handling current node
	I1217 07:53:45.601329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:53:45.601373       1 main.go:301] handling current node
	I1217 07:53:55.597764       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:53:55.597796       1 main.go:301] handling current node
	I1217 07:54:05.599052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:54:05.599091       1 main.go:301] handling current node
	I1217 07:54:15.597692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:54:15.597726       1 main.go:301] handling current node
	I1217 07:54:25.600793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:54:25.600832       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c] <==
	E1217 07:51:15.965058       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.006457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.087675       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.248835       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.570605       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	W1217 07:51:16.937158       1 handler_proxy.go:99] no RequestInfo found in the context
	W1217 07:51:16.937222       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 07:51:16.937254       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 07:51:16.937255       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1217 07:51:16.937265       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1217 07:51:16.938452       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1217 07:51:17.239560       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1217 07:51:21.483110       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 07:51:21.493629       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 07:51:21.507979       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 07:51:21.517159       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1217 07:52:02.609607       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33520: use of closed network connection
	E1217 07:52:02.767953       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33552: use of closed network connection
	I1217 07:52:09.532688       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 07:52:09.733264       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.225.131"}
	I1217 07:52:26.121515       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1217 07:54:33.362010       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.232.33"}
	
	
	==> kube-controller-manager [d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e] <==
	I1217 07:50:51.464193       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 07:50:51.464208       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 07:50:51.464212       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 07:50:51.464222       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 07:50:51.464229       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 07:50:51.464215       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 07:50:51.464270       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 07:50:51.466612       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 07:50:51.466756       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 07:50:51.466819       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 07:50:51.466867       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 07:50:51.466875       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 07:50:51.466890       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 07:50:51.468268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 07:50:51.468348       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 07:50:51.474914       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 07:50:51.478808       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-910958" podCIDRs=["10.244.0.0/24"]
	I1217 07:50:51.485270       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 07:50:53.903685       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 07:51:06.416246       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 07:51:21.474842       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 07:51:21.474923       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 07:51:21.496379       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 07:51:21.575234       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 07:51:21.596584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15] <==
	I1217 07:50:53.020854       1 server_linux.go:53] "Using iptables proxy"
	I1217 07:50:53.452023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 07:50:53.554600       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 07:50:53.557505       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 07:50:53.565884       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 07:50:53.670393       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 07:50:53.670463       1 server_linux.go:132] "Using iptables Proxier"
	I1217 07:50:53.749903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 07:50:53.751576       1 server.go:527] "Version info" version="v1.34.3"
	I1217 07:50:53.751755       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 07:50:53.754365       1 config.go:309] "Starting node config controller"
	I1217 07:50:53.754928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 07:50:53.754946       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 07:50:53.754632       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 07:50:53.754957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 07:50:53.754594       1 config.go:200] "Starting service config controller"
	I1217 07:50:53.754995       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 07:50:53.754622       1 config.go:106] "Starting endpoint slice config controller"
	I1217 07:50:53.755006       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 07:50:53.856199       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 07:50:53.856247       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 07:50:53.856390       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10] <==
	I1217 07:50:45.047836       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 07:50:45.049500       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 07:50:45.049544       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 07:50:45.049763       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 07:50:45.049801       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 07:50:45.051979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 07:50:45.052085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 07:50:45.052167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 07:50:45.052171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 07:50:45.052386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 07:50:45.052412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 07:50:45.052565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 07:50:45.052795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 07:50:45.052841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 07:50:45.052845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 07:50:45.053118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 07:50:45.053190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 07:50:45.053264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 07:50:45.053289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 07:50:45.053391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 07:50:45.053428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 07:50:45.053475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 07:50:45.053519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 07:50:45.053859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1217 07:50:46.150026       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.228474    1302 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9f2db388-8303-4c35-96b1-b38fd489ee53-gcp-creds\") on node \"addons-910958\" DevicePath \"\""
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.228510    1302 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ltvjq\" (UniqueName: \"kubernetes.io/projected/9f2db388-8303-4c35-96b1-b38fd489ee53-kube-api-access-ltvjq\") on node \"addons-910958\" DevicePath \"\""
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.228599    1302 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0ff77793-1813-4820-953e-c122f1c9a5de\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^583cc98a-db1d-11f0-8f53-6e7ab98b1a49\") on node \"addons-910958\" "
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.233207    1302 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-0ff77793-1813-4820-953e-c122f1c9a5de" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^583cc98a-db1d-11f0-8f53-6e7ab98b1a49") on node "addons-910958"
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.329955    1302 reconciler_common.go:299] "Volume detached for volume \"pvc-0ff77793-1813-4820-953e-c122f1c9a5de\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^583cc98a-db1d-11f0-8f53-6e7ab98b1a49\") on node \"addons-910958\" DevicePath \"\""
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.371210    1302 scope.go:117] "RemoveContainer" containerID="f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0"
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.380337    1302 scope.go:117] "RemoveContainer" containerID="f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0"
	Dec 17 07:52:41 addons-910958 kubelet[1302]: E1217 07:52:41.380825    1302 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0\": container with ID starting with f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0 not found: ID does not exist" containerID="f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0"
	Dec 17 07:52:41 addons-910958 kubelet[1302]: I1217 07:52:41.380865    1302 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0"} err="failed to get container status \"f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0\": rpc error: code = NotFound desc = could not find container \"f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0\": container with ID starting with f1ab81126b0d6c10b28e034791ab81441ae0a6600f24ec552ce2c66118430cf0 not found: ID does not exist"
	Dec 17 07:52:42 addons-910958 kubelet[1302]: I1217 07:52:42.776282    1302 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f2db388-8303-4c35-96b1-b38fd489ee53" path="/var/lib/kubelet/pods/9f2db388-8303-4c35-96b1-b38fd489ee53/volumes"
	Dec 17 07:52:46 addons-910958 kubelet[1302]: I1217 07:52:46.765388    1302 scope.go:117] "RemoveContainer" containerID="56da695cee4dd0109227faf78ff5819796ecea63f1f8338fa844f785b836e26e"
	Dec 17 07:52:46 addons-910958 kubelet[1302]: I1217 07:52:46.774069    1302 scope.go:117] "RemoveContainer" containerID="2fa4289a0842d27a479e65f48c50bb61bb6871f4123b9a20b44468c31885102e"
	Dec 17 07:52:46 addons-910958 kubelet[1302]: I1217 07:52:46.783487    1302 scope.go:117] "RemoveContainer" containerID="8f716085f9a81ad30438a5c2219f806b2ed5d9015f18cd2a56e06325c6823349"
	Dec 17 07:52:46 addons-910958 kubelet[1302]: I1217 07:52:46.791402    1302 scope.go:117] "RemoveContainer" containerID="f79ed4ecd1737e80ecb2530359eae10bdcc38b4fd6d43f82ae2227e51b6b13c1"
	Dec 17 07:52:46 addons-910958 kubelet[1302]: I1217 07:52:46.799521    1302 scope.go:117] "RemoveContainer" containerID="178bdf537daef0f30a80aaf431859ab14cc1bd35ae4da245a2e737d716ad7b7b"
	Dec 17 07:52:51 addons-910958 kubelet[1302]: I1217 07:52:51.773021    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-x5kj2" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:52:55 addons-910958 kubelet[1302]: I1217 07:52:55.773435    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sq4qp" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:52:56 addons-910958 kubelet[1302]: I1217 07:52:56.774856    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-vwl8f" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:53:08 addons-910958 kubelet[1302]: E1217 07:53:08.958296    1302 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-brbhv" podUID="235047b9-19f8-440e-9443-a43977c33808"
	Dec 17 07:53:26 addons-910958 kubelet[1302]: I1217 07:53:26.569025    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-brbhv" podStartSLOduration=151.661701705 podStartE2EDuration="2m33.56897709s" podCreationTimestamp="2025-12-17 07:50:53 +0000 UTC" firstStartedPulling="2025-12-17 07:53:23.797641646 +0000 UTC m=+157.112614211" lastFinishedPulling="2025-12-17 07:53:25.704917027 +0000 UTC m=+159.019889596" observedRunningTime="2025-12-17 07:53:26.568809074 +0000 UTC m=+159.883781660" watchObservedRunningTime="2025-12-17 07:53:26.56897709 +0000 UTC m=+159.883949664"
	Dec 17 07:53:54 addons-910958 kubelet[1302]: I1217 07:53:54.773246    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-x5kj2" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:53:56 addons-910958 kubelet[1302]: I1217 07:53:56.774109    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sq4qp" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:54:14 addons-910958 kubelet[1302]: I1217 07:54:14.773129    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-vwl8f" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:54:33 addons-910958 kubelet[1302]: I1217 07:54:33.414594    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aa9df58c-9ba6-4e49-baad-5f3ed3e63588-gcp-creds\") pod \"hello-world-app-5d498dc89-b82sq\" (UID: \"aa9df58c-9ba6-4e49-baad-5f3ed3e63588\") " pod="default/hello-world-app-5d498dc89-b82sq"
	Dec 17 07:54:33 addons-910958 kubelet[1302]: I1217 07:54:33.414669    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxs4h\" (UniqueName: \"kubernetes.io/projected/aa9df58c-9ba6-4e49-baad-5f3ed3e63588-kube-api-access-fxs4h\") pod \"hello-world-app-5d498dc89-b82sq\" (UID: \"aa9df58c-9ba6-4e49-baad-5f3ed3e63588\") " pod="default/hello-world-app-5d498dc89-b82sq"
	
	
	==> storage-provisioner [b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194] <==
	W1217 07:54:09.280115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:11.283099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:11.287163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:13.289934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:13.294185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:15.297698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:15.303009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:17.306170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:17.310252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:19.313898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:19.319171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:21.322644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:21.326845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:23.330417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:23.335644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:25.338694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:25.344297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:27.348142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:27.354134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:29.357453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:29.362665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:31.365614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:31.371286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:33.375381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:54:33.382480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-910958 -n addons-910958
helpers_test.go:270: (dbg) Run:  kubectl --context addons-910958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-b82sq ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-910958 describe pod hello-world-app-5d498dc89-b82sq ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-910958 describe pod hello-world-app-5d498dc89-b82sq ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5: exit status 1 (69.177799ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-b82sq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-910958/192.168.49.2
	Start Time:       Wed, 17 Dec 2025 07:54:33 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxs4h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fxs4h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-b82sq to addons-910958
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.443s (1.443s including waiting). Image size: 4944818 bytes.
	  Normal  Created    0s    kubelet            Created container: hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2r822" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5kqv5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-910958 describe pod hello-world-app-5d498dc89-b82sq ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (262.391424ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:54:35.865676  571991 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:54:35.866037  571991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:54:35.866050  571991 out.go:374] Setting ErrFile to fd 2...
	I1217 07:54:35.866056  571991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:54:35.866362  571991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:54:35.866752  571991 mustload.go:66] Loading cluster: addons-910958
	I1217 07:54:35.867226  571991 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:54:35.867254  571991 addons.go:622] checking whether the cluster is paused
	I1217 07:54:35.867385  571991 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:54:35.867402  571991 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:54:35.868020  571991 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:54:35.888953  571991 ssh_runner.go:195] Run: systemctl --version
	I1217 07:54:35.889006  571991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:54:35.908801  571991 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:54:36.002456  571991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:54:36.002566  571991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:54:36.034457  571991 cri.go:89] found id: "8fe4b4b9b6e2aaf607b9b5cfeed98eb04f4018774869c541f587002bc49b55c1"
	I1217 07:54:36.034489  571991 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:54:36.034494  571991 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:54:36.034498  571991 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:54:36.034501  571991 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:54:36.034505  571991 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:54:36.034507  571991 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:54:36.034510  571991 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:54:36.034513  571991 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:54:36.034523  571991 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:54:36.034527  571991 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:54:36.034542  571991 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:54:36.034547  571991 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:54:36.034552  571991 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:54:36.034557  571991 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:54:36.034573  571991 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:54:36.034580  571991 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:54:36.034587  571991 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:54:36.034591  571991 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:54:36.034596  571991 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:54:36.034600  571991 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:54:36.034605  571991 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:54:36.034607  571991 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:54:36.034610  571991 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:54:36.034613  571991 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:54:36.034615  571991 cri.go:89] found id: ""
	I1217 07:54:36.034675  571991 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:54:36.049652  571991 out.go:203] 
	W1217 07:54:36.051088  571991 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:54:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:54:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:54:36.051125  571991 out.go:285] * 
	* 
	W1217 07:54:36.057157  571991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:54:36.058730  571991 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable ingress --alsologtostderr -v=1: exit status 11 (260.176597ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:54:36.125424  572060 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:54:36.125759  572060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:54:36.125771  572060 out.go:374] Setting ErrFile to fd 2...
	I1217 07:54:36.125775  572060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:54:36.125972  572060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:54:36.126226  572060 mustload.go:66] Loading cluster: addons-910958
	I1217 07:54:36.126590  572060 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:54:36.126608  572060 addons.go:622] checking whether the cluster is paused
	I1217 07:54:36.126690  572060 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:54:36.126709  572060 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:54:36.127083  572060 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:54:36.147013  572060 ssh_runner.go:195] Run: systemctl --version
	I1217 07:54:36.147088  572060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:54:36.169497  572060 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:54:36.263969  572060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:54:36.264065  572060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:54:36.296721  572060 cri.go:89] found id: "8fe4b4b9b6e2aaf607b9b5cfeed98eb04f4018774869c541f587002bc49b55c1"
	I1217 07:54:36.296746  572060 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:54:36.296751  572060 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:54:36.296754  572060 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:54:36.296757  572060 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:54:36.296762  572060 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:54:36.296764  572060 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:54:36.296767  572060 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:54:36.296770  572060 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:54:36.296776  572060 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:54:36.296779  572060 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:54:36.296783  572060 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:54:36.296789  572060 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:54:36.296793  572060 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:54:36.296797  572060 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:54:36.296812  572060 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:54:36.296816  572060 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:54:36.296822  572060 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:54:36.296827  572060 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:54:36.296832  572060 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:54:36.296836  572060 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:54:36.296838  572060 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:54:36.296841  572060 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:54:36.296843  572060 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:54:36.296849  572060 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:54:36.296861  572060 cri.go:89] found id: ""
	I1217 07:54:36.296911  572060 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:54:36.312460  572060 out.go:203] 
	W1217 07:54:36.313736  572060 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:54:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:54:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:54:36.313758  572060 out.go:285] * 
	* 
	W1217 07:54:36.318015  572060 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:54:36.319615  572060 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.04s)
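Every addons-disable failure in this report exits with the same MK_ADDON_DISABLE_PAUSED error: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers via crictl and then running `sudo runc list -f json` on the node, and on this CRI-O configuration /run/runc does not exist, so the runc call exits 1 and the disable is aborted. The two commands the check runs can be replayed by hand (a sketch only; the flags are copied from the stderr above, and `minikube ssh` is just one convenient way to reach the node with the docker driver):

	out/minikube-linux-amd64 -p addons-910958 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # CRI-level listing succeeds
	out/minikube-linux-amd64 -p addons-910958 ssh "sudo runc list -f json"                                                      # fails: open /run/runc: no such file or directory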

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-g8wb6" [32765bf5-bcf2-4996-bd4a-0abcc284a6f2] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004545933s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (265.381936ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:15.957155  568200 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:15.957268  568200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:15.957274  568200 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:15.957281  568200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:15.957510  568200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:15.957808  568200 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:15.958140  568200 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:15.958157  568200 addons.go:622] checking whether the cluster is paused
	I1217 07:52:15.958238  568200 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:15.958256  568200 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:15.958687  568200 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:15.977884  568200 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:15.977975  568200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:15.997875  568200 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:16.092848  568200 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:16.092955  568200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:16.127109  568200 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:16.127138  568200 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:16.127144  568200 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:16.127148  568200 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:16.127150  568200 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:16.127154  568200 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:16.127157  568200 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:16.127160  568200 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:16.127162  568200 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:16.127168  568200 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:16.127171  568200 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:16.127173  568200 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:16.127176  568200 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:16.127178  568200 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:16.127202  568200 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:16.127218  568200 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:16.127223  568200 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:16.127243  568200 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:16.127265  568200 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:16.127268  568200 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:16.127274  568200 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:16.127281  568200 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:16.127286  568200 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:16.127290  568200 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:16.127294  568200 cri.go:89] found id: ""
	I1217 07:52:16.127363  568200 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:16.145088  568200 out.go:203] 
	W1217 07:52:16.146714  568200 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:16.146753  568200 out.go:285] * 
	* 
	W1217 07:52:16.151050  568200 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:16.152697  568200 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.194652ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002753338s
addons_test.go:465: (dbg) Run:  kubectl --context addons-910958 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (250.121377ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:08.152417  566974 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:08.152722  566974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:08.152733  566974 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:08.152737  566974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:08.152996  566974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:08.153324  566974 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:08.153749  566974 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:08.153772  566974 addons.go:622] checking whether the cluster is paused
	I1217 07:52:08.153876  566974 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:08.153900  566974 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:08.154371  566974 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:08.172526  566974 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:08.172607  566974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:08.191027  566974 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:08.284619  566974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:08.284703  566974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:08.315914  566974 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:08.315946  566974 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:08.315950  566974 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:08.315954  566974 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:08.315957  566974 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:08.315961  566974 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:08.315964  566974 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:08.315967  566974 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:08.315970  566974 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:08.315975  566974 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:08.315978  566974 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:08.315980  566974 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:08.315983  566974 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:08.315986  566974 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:08.315989  566974 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:08.316006  566974 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:08.316012  566974 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:08.316020  566974 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:08.316029  566974 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:08.316034  566974 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:08.316039  566974 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:08.316043  566974 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:08.316047  566974 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:08.316050  566974 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:08.316053  566974 cri.go:89] found id: ""
	I1217 07:52:08.316103  566974 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:08.331498  566974 out.go:203] 
	W1217 07:52:08.332824  566974 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:08.332847  566974 out.go:285] * 
	* 
	W1217 07:52:08.336985  566974 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:08.338614  566974 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

                                                
                                    
x
+
TestAddons/parallel/CSI (24.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1217 07:52:17.716245  556055 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 07:52:17.719479  556055 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 07:52:17.719508  556055 kapi.go:107] duration metric: took 3.285506ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.297352ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-910958 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-910958 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [90fc3182-aaf7-49fd-bd85-9f64551a32bc] Pending
helpers_test.go:353: "task-pv-pod" [90fc3182-aaf7-49fd-bd85-9f64551a32bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [90fc3182-aaf7-49fd-bd85-9f64551a32bc] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003869693s
addons_test.go:574: (dbg) Run:  kubectl --context addons-910958 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-910958 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-910958 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-910958 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-910958 delete pod task-pv-pod: (1.14686209s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-910958 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-910958 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-910958 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [9f2db388-8303-4c35-96b1-b38fd489ee53] Pending
helpers_test.go:353: "task-pv-pod-restore" [9f2db388-8303-4c35-96b1-b38fd489ee53] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004379411s
addons_test.go:616: (dbg) Run:  kubectl --context addons-910958 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-910958 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-910958 delete volumesnapshot new-snapshot-demo
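The restore sequence above works through the standard CSI snapshot dataSource mechanism: hpvc-restore is a new claim whose spec.dataSource points at the VolumeSnapshot new-snapshot-demo, and the csi-hostpath driver provisions the restored volume from it. The contents of testdata/csi-hostpath-driver/pvc-restore.yaml are not reproduced in this log; a minimal claim of that shape looks roughly like the sketch below (the storage class name and size are assumptions, not the actual testdata):

	kubectl --context addons-910958 create -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc        # assumed: the class registered by the csi-hostpath-driver addon
	  dataSource:
	    apiGroup: snapshot.storage.k8s.io
	    kind: VolumeSnapshot
	    name: new-snapshot-demo                # the snapshot created by snapshot.yaml above
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi                         # assumed: must be at least the snapshotted volume's size
	EOF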
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (249.570932ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:41.781926  569701 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:41.782232  569701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:41.782244  569701 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:41.782250  569701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:41.782512  569701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:41.782849  569701 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:41.783237  569701 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:41.783258  569701 addons.go:622] checking whether the cluster is paused
	I1217 07:52:41.783356  569701 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:41.783373  569701 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:41.783830  569701 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:41.802422  569701 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:41.802490  569701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:41.821338  569701 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:41.914570  569701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:41.914662  569701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:41.945194  569701 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:41.945220  569701 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:41.945226  569701 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:41.945232  569701 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:41.945237  569701 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:41.945242  569701 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:41.945247  569701 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:41.945251  569701 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:41.945256  569701 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:41.945277  569701 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:41.945286  569701 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:41.945290  569701 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:41.945293  569701 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:41.945296  569701 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:41.945299  569701 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:41.945313  569701 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:41.945322  569701 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:41.945327  569701 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:41.945332  569701 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:41.945335  569701 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:41.945338  569701 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:41.945341  569701 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:41.945344  569701 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:41.945347  569701 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:41.945350  569701 cri.go:89] found id: ""
	I1217 07:52:41.945395  569701 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:41.960364  569701 out.go:203] 
	W1217 07:52:41.961664  569701 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:41.961681  569701 out.go:285] * 
	* 
	W1217 07:52:41.965878  569701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:41.967284  569701 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (249.616444ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 07:52:42.029870  569764 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:42.030165  569764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:42.030176  569764 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:42.030180  569764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:42.030414  569764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:42.030726  569764 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:42.031063  569764 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:42.031079  569764 addons.go:622] checking whether the cluster is paused
	I1217 07:52:42.031161  569764 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:42.031174  569764 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:42.031632  569764 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:42.051036  569764 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:42.051110  569764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:42.069785  569764 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:42.162793  569764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:42.162935  569764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:42.193868  569764 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:42.193896  569764 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:42.193904  569764 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:42.193909  569764 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:42.193914  569764 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:42.193919  569764 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:42.193923  569764 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:42.193927  569764 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:42.193931  569764 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:42.193942  569764 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:42.193947  569764 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:42.193952  569764 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:42.193957  569764 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:42.193961  569764 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:42.193967  569764 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:42.193996  569764 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:42.194009  569764 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:42.194016  569764 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:42.194021  569764 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:42.194027  569764 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:42.194034  569764 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:42.194042  569764 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:42.194048  569764 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:42.194056  569764 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:42.194060  569764 cri.go:89] found id: ""
	I1217 07:52:42.194110  569764 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:42.210047  569764 out.go:203] 
	W1217 07:52:42.211464  569764 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:42.211492  569764 out.go:285] * 
	* 
	W1217 07:52:42.215880  569764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:42.217565  569764 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (24.51s)
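Note on the failure mode: the volumesnapshots and csi-hostpath-driver disable attempts above fail the same way. Before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers via crictl and then running `sudo runc list -f json` on the node; that second command exits 1 with "open /run/runc: no such file or directory", so the CLI aborts with MK_ADDON_DISABLE_PAUSED without ever reaching the addon. A minimal sketch for reproducing the check by hand, assuming the addons-910958 node is still running (whether /run/runc is simply absent on this crio node or the runtime keeps its state elsewhere is an assumption the report does not confirm):

	# list kube-system containers the same way the paused-check does
	out/minikube-linux-amd64 -p addons-910958 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# reproduce the failing step ("open /run/runc: no such file or directory")
	out/minikube-linux-amd64 -p addons-910958 ssh -- sudo runc list -f json
	# check whether the runc state directory exists at all on the node
	out/minikube-linux-amd64 -p addons-910958 ssh -- ls -ld /run/runc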

TestAddons/parallel/Headlamp (2.6s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-910958 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-910958 --alsologtostderr -v=1: exit status 11 (251.508962ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 07:52:03.087622  566148 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:03.087733  566148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:03.087738  566148 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:03.087743  566148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:03.088015  566148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:03.088407  566148 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:03.088842  566148 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:03.088862  566148 addons.go:622] checking whether the cluster is paused
	I1217 07:52:03.088951  566148 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:03.088970  566148 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:03.089552  566148 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:03.108349  566148 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:03.108429  566148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:03.126214  566148 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:03.218623  566148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:03.218731  566148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:03.248692  566148 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:03.248715  566148 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:03.248718  566148 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:03.248722  566148 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:03.248725  566148 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:03.248731  566148 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:03.248734  566148 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:03.248736  566148 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:03.248739  566148 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:03.248746  566148 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:03.248750  566148 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:03.248753  566148 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:03.248756  566148 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:03.248759  566148 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:03.248762  566148 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:03.248778  566148 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:03.248783  566148 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:03.248788  566148 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:03.248790  566148 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:03.248793  566148 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:03.248799  566148 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:03.248805  566148 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:03.248807  566148 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:03.248810  566148 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:03.248813  566148 cri.go:89] found id: ""
	I1217 07:52:03.248851  566148 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:03.264041  566148 out.go:203] 
	W1217 07:52:03.265665  566148 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:03.265693  566148 out.go:285] * 
	* 
	W1217 07:52:03.269839  566148 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:03.271576  566148 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-910958 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-910958
helpers_test.go:244: (dbg) docker inspect addons-910958:

-- stdout --
	[
	    {
	        "Id": "baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26",
	        "Created": "2025-12-17T07:50:30.93101818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 558558,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T07:50:30.968094981Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/hostname",
	        "HostsPath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/hosts",
	        "LogPath": "/var/lib/docker/containers/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26/baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26-json.log",
	        "Name": "/addons-910958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-910958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-910958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "baf2bab91de7b040a506f2bc3407e1a7da703ddb1c355bf85a451117e9df3c26",
	                "LowerDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c93f6daccb08def7a4c967da6a223fba8700890d0ad45732c65afaf1eec27ec3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-910958",
	                "Source": "/var/lib/docker/volumes/addons-910958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-910958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-910958",
	                "name.minikube.sigs.k8s.io": "addons-910958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e92eb2edad02ba7482ca522886cda08a3b3d3d9a073dbda8d59f3204ce839efb",
	            "SandboxKey": "/var/run/docker/netns/e92eb2edad02",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-910958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37b37450991e7ebf3dd0772299b7ae7e43842e4360f2197f6db56f7931547f66",
	                    "EndpointID": "f792db5c7c829b09e4c3877a09c931775c6aba89629b611c029660ab679db13b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "02:c4:26:57:1f:39",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-910958",
	                        "baf2bab91de7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
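In the stderr traces above, minikube resolves the node's SSH port with a docker inspect Go template over .NetworkSettings.Ports; the inspect output just shown maps "22/tcp" to HostPort "33170", which matches the `new ssh client ... Port:33170` lines. An equivalent interactive-shell form of that lookup, assuming the addons-910958 container is still up:

	# print the host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-910958
	# for the state captured above this prints: 33170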
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-910958 -n addons-910958
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-910958 logs -n 25: (1.176290127s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-635623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-635623   │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ delete  │ -p download-only-635623                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-635623   │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ start   │ -o=json --download-only -p download-only-284037 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-284037   │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ delete  │ -p download-only-284037                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-284037   │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ start   │ -o=json --download-only -p download-only-505037 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                           │ download-only-505037   │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ delete  │ -p download-only-505037                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-505037   │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ delete  │ -p download-only-635623                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-635623   │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ delete  │ -p download-only-284037                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-284037   │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ delete  │ -p download-only-505037                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-505037   │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ start   │ --download-only -p download-docker-300295 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-300295 │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │                     │
	│ delete  │ -p download-docker-300295                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-300295 │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ start   │ --download-only -p binary-mirror-777344 --alsologtostderr --binary-mirror http://127.0.0.1:43583 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-777344   │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │                     │
	│ delete  │ -p binary-mirror-777344                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-777344   │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:50 UTC │
	│ addons  │ disable dashboard -p addons-910958                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-910958          │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │                     │
	│ addons  │ enable dashboard -p addons-910958                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-910958          │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │                     │
	│ start   │ -p addons-910958 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-910958          │ jenkins │ v1.37.0 │ 17 Dec 25 07:50 UTC │ 17 Dec 25 07:51 UTC │
	│ addons  │ addons-910958 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-910958          │ jenkins │ v1.37.0 │ 17 Dec 25 07:51 UTC │                     │
	│ addons  │ addons-910958 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-910958          │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	│ addons  │ enable headlamp -p addons-910958 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-910958          │ jenkins │ v1.37.0 │ 17 Dec 25 07:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 07:50:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 07:50:09.715842  557899 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:50:09.715967  557899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:50:09.715981  557899 out.go:374] Setting ErrFile to fd 2...
	I1217 07:50:09.715985  557899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:50:09.716164  557899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:50:09.716778  557899 out.go:368] Setting JSON to false
	I1217 07:50:09.717785  557899 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5555,"bootTime":1765952255,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 07:50:09.717862  557899 start.go:143] virtualization: kvm guest
	I1217 07:50:09.720336  557899 out.go:179] * [addons-910958] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 07:50:09.722077  557899 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 07:50:09.722093  557899 notify.go:221] Checking for updates...
	I1217 07:50:09.725438  557899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 07:50:09.727265  557899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:50:09.729365  557899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 07:50:09.731073  557899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 07:50:09.732951  557899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 07:50:09.734973  557899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 07:50:09.760335  557899 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 07:50:09.760454  557899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:50:09.820111  557899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 07:50:09.809514658 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:50:09.820219  557899 docker.go:319] overlay module found
	I1217 07:50:09.822350  557899 out.go:179] * Using the docker driver based on user configuration
	I1217 07:50:09.824318  557899 start.go:309] selected driver: docker
	I1217 07:50:09.824367  557899 start.go:927] validating driver "docker" against <nil>
	I1217 07:50:09.824381  557899 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 07:50:09.825045  557899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:50:09.888973  557899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-17 07:50:09.877771707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:50:09.889135  557899 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 07:50:09.889356  557899 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 07:50:09.891413  557899 out.go:179] * Using Docker driver with root privileges
	I1217 07:50:09.892872  557899 cni.go:84] Creating CNI manager for ""
	I1217 07:50:09.892946  557899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:50:09.892960  557899 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 07:50:09.893027  557899 start.go:353] cluster config:
	{Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1217 07:50:09.894685  557899 out.go:179] * Starting "addons-910958" primary control-plane node in "addons-910958" cluster
	I1217 07:50:09.896084  557899 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 07:50:09.897591  557899 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 07:50:09.899016  557899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:50:09.899058  557899 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 07:50:09.899077  557899 cache.go:65] Caching tarball of preloaded images
	I1217 07:50:09.899085  557899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 07:50:09.899206  557899 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 07:50:09.899226  557899 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 07:50:09.899661  557899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/config.json ...
	I1217 07:50:09.899697  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/config.json: {Name:mk796d49a15f21053d007d40367a1b2b7c23560b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:09.917554  557899 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 07:50:09.917781  557899 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 07:50:09.917806  557899 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 07:50:09.917812  557899 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 07:50:09.917821  557899 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 07:50:09.917828  557899 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 07:50:23.253947  557899 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 07:50:23.253984  557899 cache.go:243] Successfully downloaded all kic artifacts
	I1217 07:50:23.254026  557899 start.go:360] acquireMachinesLock for addons-910958: {Name:mkaedf734c4ba4da4503e198fef98048b1048577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 07:50:23.254132  557899 start.go:364] duration metric: took 87.03µs to acquireMachinesLock for "addons-910958"
	I1217 07:50:23.254158  557899 start.go:93] Provisioning new machine with config: &{Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 07:50:23.254237  557899 start.go:125] createHost starting for "" (driver="docker")
	I1217 07:50:23.256307  557899 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 07:50:23.256636  557899 start.go:159] libmachine.API.Create for "addons-910958" (driver="docker")
	I1217 07:50:23.256672  557899 client.go:173] LocalClient.Create starting
	I1217 07:50:23.256773  557899 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 07:50:23.379071  557899 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 07:50:23.455440  557899 cli_runner.go:164] Run: docker network inspect addons-910958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 07:50:23.473199  557899 cli_runner.go:211] docker network inspect addons-910958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 07:50:23.473421  557899 network_create.go:284] running [docker network inspect addons-910958] to gather additional debugging logs...
	I1217 07:50:23.473542  557899 cli_runner.go:164] Run: docker network inspect addons-910958
	W1217 07:50:23.491227  557899 cli_runner.go:211] docker network inspect addons-910958 returned with exit code 1
	I1217 07:50:23.491274  557899 network_create.go:287] error running [docker network inspect addons-910958]: docker network inspect addons-910958: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-910958 not found
	I1217 07:50:23.491289  557899 network_create.go:289] output of [docker network inspect addons-910958]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-910958 not found
	
	** /stderr **
	I1217 07:50:23.491414  557899 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 07:50:23.510197  557899 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002072900}
	I1217 07:50:23.510260  557899 network_create.go:124] attempt to create docker network addons-910958 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 07:50:23.510357  557899 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-910958 addons-910958
	I1217 07:50:23.561243  557899 network_create.go:108] docker network addons-910958 192.168.49.0/24 created
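The isolated bridge network created above can be reproduced or inspected by hand with the same docker CLI calls the log records; the profile name addons-910958, the 192.168.49.0/24 subnet and the 1500 MTU are the values chosen in this particular run and will differ on another host. A minimal manual sketch:

    # create the bridge network minikube uses for the node container (values from this run)
    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-910958 \
      addons-910958

    # confirm the subnet and gateway that were assigned
    docker network inspect addons-910958 --format '{{json .IPAM.Config}}'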
	I1217 07:50:23.561289  557899 kic.go:121] calculated static IP "192.168.49.2" for the "addons-910958" container
	I1217 07:50:23.561378  557899 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 07:50:23.578783  557899 cli_runner.go:164] Run: docker volume create addons-910958 --label name.minikube.sigs.k8s.io=addons-910958 --label created_by.minikube.sigs.k8s.io=true
	I1217 07:50:23.597215  557899 oci.go:103] Successfully created a docker volume addons-910958
	I1217 07:50:23.597318  557899 cli_runner.go:164] Run: docker run --rm --name addons-910958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910958 --entrypoint /usr/bin/test -v addons-910958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 07:50:27.015458  557899 cli_runner.go:217] Completed: docker run --rm --name addons-910958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910958 --entrypoint /usr/bin/test -v addons-910958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (3.418095293s)
	I1217 07:50:27.015490  557899 oci.go:107] Successfully prepared a docker volume addons-910958
	I1217 07:50:27.015519  557899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:50:27.015528  557899 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 07:50:27.015617  557899 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-910958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 07:50:30.857629  557899 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-910958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.841964972s)
	I1217 07:50:30.857675  557899 kic.go:203] duration metric: took 3.842142462s to extract preloaded images to volume ...
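The preload step above amounts to unpacking an lz4-compressed tarball of container images into the docker volume that later becomes /var inside the node. A manual equivalent, assuming the default ~/.minikube cache location rather than the Jenkins workspace path used in this run, would be roughly:

    # create the named volume and unpack the preloaded images into it
    docker volume create addons-910958 --label name.minikube.sigs.k8s.io=addons-910958
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
      -v addons-910958:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 \
      -I lz4 -xf /preloaded.tar -C /extractDir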
	W1217 07:50:30.857776  557899 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 07:50:30.857817  557899 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 07:50:30.857872  557899 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 07:50:30.914826  557899 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-910958 --name addons-910958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-910958 --network addons-910958 --ip 192.168.49.2 --volume addons-910958:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 07:50:31.186277  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Running}}
	I1217 07:50:31.206510  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:31.228029  557899 cli_runner.go:164] Run: docker exec addons-910958 stat /var/lib/dpkg/alternatives/iptables
	I1217 07:50:31.274171  557899 oci.go:144] the created container "addons-910958" has a running status.
	I1217 07:50:31.274229  557899 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519...
	I1217 07:50:31.275771  557899 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 07:50:31.302708  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:31.322985  557899 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 07:50:31.323009  557899 kic_runner.go:114] Args: [docker exec --privileged addons-910958 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 07:50:31.372037  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:31.390404  557899 machine.go:94] provisionDockerMachine start ...
	I1217 07:50:31.390518  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:31.409129  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:31.409298  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:31.409316  557899 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 07:50:31.410099  557899 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55668->127.0.0.1:33170: read: connection reset by peer
	I1217 07:50:34.541086  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-910958
	
	I1217 07:50:34.541120  557899 ubuntu.go:182] provisioning hostname "addons-910958"
	I1217 07:50:34.541279  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:34.561723  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:34.561828  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:34.561840  557899 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-910958 && echo "addons-910958" | sudo tee /etc/hostname
	I1217 07:50:34.698669  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-910958
	
	I1217 07:50:34.698764  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:34.717936  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:34.718037  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:34.718056  557899 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-910958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-910958/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-910958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 07:50:34.849853  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 07:50:34.849887  557899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 07:50:34.849918  557899 ubuntu.go:190] setting up certificates
	I1217 07:50:34.849936  557899 provision.go:84] configureAuth start
	I1217 07:50:34.850006  557899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910958
	I1217 07:50:34.869138  557899 provision.go:143] copyHostCerts
	I1217 07:50:34.869237  557899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 07:50:34.869383  557899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 07:50:34.869487  557899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 07:50:34.869587  557899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.addons-910958 san=[127.0.0.1 192.168.49.2 addons-910958 localhost minikube]
	I1217 07:50:35.062792  557899 provision.go:177] copyRemoteCerts
	I1217 07:50:35.062870  557899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 07:50:35.062925  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.081341  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
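For debugging, the node can also be reached directly over SSH with the generated ed25519 key and the ephemeral host port shown in the client struct above; the port (33170 here) is assigned by Docker per run, and `minikube ssh -p addons-910958` is the supported wrapper for the same thing. A sketch with this run's values:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 \
        -p 33170 docker@127.0.0.1 hostname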
	I1217 07:50:35.175642  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 07:50:35.196282  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 07:50:35.214861  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 07:50:35.233104  557899 provision.go:87] duration metric: took 383.146847ms to configureAuth
	I1217 07:50:35.233137  557899 ubuntu.go:206] setting minikube options for container-runtime
	I1217 07:50:35.233331  557899 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:50:35.233451  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.251636  557899 main.go:143] libmachine: Using SSH client type: native
	I1217 07:50:35.251771  557899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33170 <nil> <nil>}
	I1217 07:50:35.251794  557899 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 07:50:35.532651  557899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 07:50:35.532678  557899 machine.go:97] duration metric: took 4.142236727s to provisionDockerMachine
	I1217 07:50:35.532691  557899 client.go:176] duration metric: took 12.276010318s to LocalClient.Create
	I1217 07:50:35.532717  557899 start.go:167] duration metric: took 12.276081977s to libmachine.API.Create "addons-910958"
	I1217 07:50:35.532727  557899 start.go:293] postStartSetup for "addons-910958" (driver="docker")
	I1217 07:50:35.532741  557899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 07:50:35.532811  557899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 07:50:35.532859  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.551585  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.647828  557899 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 07:50:35.651757  557899 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 07:50:35.651783  557899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 07:50:35.651796  557899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 07:50:35.651854  557899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 07:50:35.651879  557899 start.go:296] duration metric: took 119.145399ms for postStartSetup
	I1217 07:50:35.652169  557899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910958
	I1217 07:50:35.670791  557899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/config.json ...
	I1217 07:50:35.671099  557899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 07:50:35.671142  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.689155  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.779790  557899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 07:50:35.784575  557899 start.go:128] duration metric: took 12.530320153s to createHost
	I1217 07:50:35.784604  557899 start.go:83] releasing machines lock for "addons-910958", held for 12.530459547s
	I1217 07:50:35.784683  557899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910958
	I1217 07:50:35.803772  557899 ssh_runner.go:195] Run: cat /version.json
	I1217 07:50:35.803825  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.803867  557899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 07:50:35.803954  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:35.822707  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.823672  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:35.966991  557899 ssh_runner.go:195] Run: systemctl --version
	I1217 07:50:35.973699  557899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 07:50:36.011200  557899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 07:50:36.016073  557899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 07:50:36.016129  557899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 07:50:36.044794  557899 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 07:50:36.044822  557899 start.go:496] detecting cgroup driver to use...
	I1217 07:50:36.044863  557899 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 07:50:36.044922  557899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 07:50:36.062196  557899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 07:50:36.075282  557899 docker.go:218] disabling cri-docker service (if available) ...
	I1217 07:50:36.075352  557899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 07:50:36.092785  557899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 07:50:36.110922  557899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 07:50:36.195914  557899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 07:50:36.285326  557899 docker.go:234] disabling docker service ...
	I1217 07:50:36.285432  557899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 07:50:36.305597  557899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 07:50:36.319468  557899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 07:50:36.402715  557899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 07:50:36.483204  557899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
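Because CRI-O is the selected runtime, both cri-dockerd and dockerd are switched off inside the node before CRI-O is configured. Collected from the commands above, the sequence is roughly (all run over SSH inside the node container):

    # stop and mask the competing runtimes so only CRI-O serves the CRI socket
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service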
	I1217 07:50:36.496479  557899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 07:50:36.511549  557899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 07:50:36.511650  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.522602  557899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 07:50:36.522668  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.531918  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.541846  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.551684  557899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 07:50:36.560619  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.569861  557899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.583595  557899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 07:50:36.592682  557899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 07:50:36.600651  557899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 07:50:36.608124  557899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 07:50:36.689794  557899 ssh_runner.go:195] Run: sudo systemctl restart crio
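The CRI-O tuning above is a handful of in-place edits to the drop-in config followed by a restart. With the path and values used in this run, the combined manual version looks approximately like:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # pause image and cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # let pods bind privileged ports
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo systemctl daemon-reload && sudo systemctl restart crio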
	I1217 07:50:36.832206  557899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 07:50:36.832292  557899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 07:50:36.836412  557899 start.go:564] Will wait 60s for crictl version
	I1217 07:50:36.836472  557899 ssh_runner.go:195] Run: which crictl
	I1217 07:50:36.840333  557899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 07:50:36.865737  557899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 07:50:36.865831  557899 ssh_runner.go:195] Run: crio --version
	I1217 07:50:36.895056  557899 ssh_runner.go:195] Run: crio --version
	I1217 07:50:36.926549  557899 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 07:50:36.928026  557899 cli_runner.go:164] Run: docker network inspect addons-910958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 07:50:36.946500  557899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 07:50:36.950796  557899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 07:50:36.961170  557899 kubeadm.go:884] updating cluster {Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 07:50:36.961374  557899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:50:36.961432  557899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 07:50:36.996134  557899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 07:50:36.996155  557899 crio.go:433] Images already preloaded, skipping extraction
	I1217 07:50:36.996201  557899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 07:50:37.023161  557899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 07:50:37.023186  557899 cache_images.go:86] Images are preloaded, skipping loading
	I1217 07:50:37.023195  557899 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 07:50:37.023289  557899 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-910958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
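The kubelet unit override shown above ends up as a systemd drop-in; later in the log it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Inside the node, the merged result can be checked with:

    # show the effective kubelet unit, including the minikube drop-in
    systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf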
	I1217 07:50:37.023373  557899 ssh_runner.go:195] Run: crio config
	I1217 07:50:37.069475  557899 cni.go:84] Creating CNI manager for ""
	I1217 07:50:37.069510  557899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:50:37.069546  557899 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 07:50:37.069578  557899 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-910958 NodeName:addons-910958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 07:50:37.069724  557899 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-910958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 07:50:37.069789  557899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 07:50:37.078239  557899 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 07:50:37.078319  557899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 07:50:37.085947  557899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 07:50:37.099075  557899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 07:50:37.115144  557899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
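At this point the generated kubeadm configuration has been staged at /var/tmp/minikube/kubeadm.yaml.new; it is promoted to kubeadm.yaml just before init runs. One way to see how it deviates from kubeadm's stock settings, assuming the staged file and the bundled kubeadm binary from this run, is:

    # diff the generated config against kubeadm's built-in defaults
    KUBEADM=/var/lib/minikube/binaries/v1.34.3/kubeadm
    diff <("$KUBEADM" config print init-defaults) /var/tmp/minikube/kubeadm.yaml.new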
	I1217 07:50:37.128419  557899 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 07:50:37.132463  557899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 07:50:37.142744  557899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 07:50:37.223031  557899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 07:50:37.247014  557899 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958 for IP: 192.168.49.2
	I1217 07:50:37.247038  557899 certs.go:195] generating shared ca certs ...
	I1217 07:50:37.247066  557899 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.247208  557899 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 07:50:37.383987  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt ...
	I1217 07:50:37.384023  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt: {Name:mk070ca0ba13d83573609cb6f57680e38590740e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.384231  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key ...
	I1217 07:50:37.384248  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key: {Name:mk5ab23f07566032aa7d7528721f48743db4e09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.384354  557899 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 07:50:37.418404  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt ...
	I1217 07:50:37.418439  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt: {Name:mkf3379bdb5e7e03abf4cc4ccd466bba9355eae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.418639  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key ...
	I1217 07:50:37.418656  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key: {Name:mk6bce123259c1725aff073ddde7aa8d2e59fbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.418769  557899 certs.go:257] generating profile certs ...
	I1217 07:50:37.418847  557899 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.key
	I1217 07:50:37.418871  557899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt with IP's: []
	I1217 07:50:37.475762  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt ...
	I1217 07:50:37.475797  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: {Name:mk68ac088f857d3e4471b7d1160c12ca2910613c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.476001  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.key ...
	I1217 07:50:37.476019  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.key: {Name:mk969df5b3fb391a002f85454e7b25bd5e33aa53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.476123  557899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f
	I1217 07:50:37.476148  557899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 07:50:37.494018  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f ...
	I1217 07:50:37.494056  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f: {Name:mkd6ef75bf96733e6906730457c156c43906402b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.494245  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f ...
	I1217 07:50:37.494275  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f: {Name:mk6cdfafb37bbb110790ff2d6990099e317af7e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.494390  557899 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt.cfa0e06f -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt
	I1217 07:50:37.494514  557899 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key.cfa0e06f -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key
	I1217 07:50:37.494623  557899 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key
	I1217 07:50:37.494652  557899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt with IP's: []
	I1217 07:50:37.589144  557899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt ...
	I1217 07:50:37.589182  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt: {Name:mk82f4f2a1f0c6f13d1d7c55ca4ac295e7f0b821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.589368  557899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key ...
	I1217 07:50:37.589385  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key: {Name:mk9c596c0e945185034e52c57a57a2fce10a889b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:37.589591  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 07:50:37.589640  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 07:50:37.589670  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 07:50:37.589696  557899 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 07:50:37.590276  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 07:50:37.609123  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 07:50:37.627235  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 07:50:37.645600  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 07:50:37.663807  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 07:50:37.681797  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 07:50:37.699910  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 07:50:37.719007  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 07:50:37.737623  557899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 07:50:37.758314  557899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 07:50:37.771265  557899 ssh_runner.go:195] Run: openssl version
	I1217 07:50:37.777708  557899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.785443  557899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 07:50:37.796486  557899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.800445  557899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.800508  557899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 07:50:37.834672  557899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 07:50:37.842703  557899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
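The openssl/ln pair above is how OpenSSL's hashed CA directory works: the link name (b5213941.0 in this run) is the certificate's subject-name hash, which is what lets the trust store locate minikubeCA.pem during verification. The generic form, assuming the cert has already been placed under /usr/share/ca-certificates, is:

    # register a CA in /etc/ssl/certs under its subject hash
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"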
	I1217 07:50:37.850361  557899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 07:50:37.854262  557899 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 07:50:37.854316  557899 kubeadm.go:401] StartCluster: {Name:addons-910958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-910958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 07:50:37.854428  557899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:50:37.854486  557899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:50:37.883512  557899 cri.go:89] found id: ""
	I1217 07:50:37.883622  557899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 07:50:37.892001  557899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 07:50:37.900624  557899 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 07:50:37.900718  557899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 07:50:37.909205  557899 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 07:50:37.909224  557899 kubeadm.go:158] found existing configuration files:
	
	I1217 07:50:37.909281  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 07:50:37.917288  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 07:50:37.917355  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 07:50:37.924786  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 07:50:37.932711  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 07:50:37.932800  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 07:50:37.940847  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 07:50:37.948736  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 07:50:37.948798  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 07:50:37.956276  557899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 07:50:37.964592  557899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 07:50:37.964662  557899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 07:50:37.972511  557899 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 07:50:38.038199  557899 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 07:50:38.101234  557899 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 07:50:47.536912  557899 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 07:50:47.536997  557899 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 07:50:47.537116  557899 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 07:50:47.537190  557899 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 07:50:47.537225  557899 kubeadm.go:319] OS: Linux
	I1217 07:50:47.537268  557899 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 07:50:47.537309  557899 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 07:50:47.537352  557899 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 07:50:47.537394  557899 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 07:50:47.537441  557899 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 07:50:47.537482  557899 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 07:50:47.537560  557899 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 07:50:47.537620  557899 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 07:50:47.537715  557899 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 07:50:47.537860  557899 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 07:50:47.537973  557899 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 07:50:47.538030  557899 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 07:50:47.539992  557899 out.go:252]   - Generating certificates and keys ...
	I1217 07:50:47.540064  557899 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 07:50:47.540117  557899 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 07:50:47.540187  557899 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 07:50:47.540255  557899 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 07:50:47.540309  557899 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 07:50:47.540366  557899 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 07:50:47.540411  557899 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 07:50:47.540522  557899 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-910958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 07:50:47.540590  557899 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 07:50:47.540703  557899 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-910958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 07:50:47.540766  557899 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 07:50:47.540858  557899 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 07:50:47.540941  557899 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 07:50:47.540997  557899 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 07:50:47.541043  557899 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 07:50:47.541092  557899 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 07:50:47.541138  557899 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 07:50:47.541199  557899 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 07:50:47.541247  557899 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 07:50:47.541323  557899 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 07:50:47.541379  557899 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 07:50:47.542897  557899 out.go:252]   - Booting up control plane ...
	I1217 07:50:47.543010  557899 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 07:50:47.543074  557899 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 07:50:47.543132  557899 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 07:50:47.543216  557899 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 07:50:47.543294  557899 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 07:50:47.543382  557899 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 07:50:47.543469  557899 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 07:50:47.543516  557899 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 07:50:47.543660  557899 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 07:50:47.543761  557899 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 07:50:47.543819  557899 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.053427ms
	I1217 07:50:47.543902  557899 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 07:50:47.543966  557899 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 07:50:47.544036  557899 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 07:50:47.544098  557899 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 07:50:47.544162  557899 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.609729191s
	I1217 07:50:47.544222  557899 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.431555577s
	I1217 07:50:47.544282  557899 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501367111s
	I1217 07:50:47.544365  557899 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 07:50:47.544474  557899 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 07:50:47.544544  557899 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 07:50:47.544708  557899 kubeadm.go:319] [mark-control-plane] Marking the node addons-910958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 07:50:47.544755  557899 kubeadm.go:319] [bootstrap-token] Using token: kmd3fl.fb4wvkd0q8yiee8n
	I1217 07:50:47.546513  557899 out.go:252]   - Configuring RBAC rules ...
	I1217 07:50:47.546638  557899 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 07:50:47.546713  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 07:50:47.546853  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 07:50:47.546991  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 07:50:47.547113  557899 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 07:50:47.547199  557899 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 07:50:47.547300  557899 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 07:50:47.547365  557899 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 07:50:47.547410  557899 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 07:50:47.547418  557899 kubeadm.go:319] 
	I1217 07:50:47.547467  557899 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 07:50:47.547475  557899 kubeadm.go:319] 
	I1217 07:50:47.547549  557899 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 07:50:47.547554  557899 kubeadm.go:319] 
	I1217 07:50:47.547574  557899 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 07:50:47.547632  557899 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 07:50:47.547674  557899 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 07:50:47.547683  557899 kubeadm.go:319] 
	I1217 07:50:47.547728  557899 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 07:50:47.547732  557899 kubeadm.go:319] 
	I1217 07:50:47.547776  557899 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 07:50:47.547788  557899 kubeadm.go:319] 
	I1217 07:50:47.547828  557899 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 07:50:47.547892  557899 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 07:50:47.547947  557899 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 07:50:47.547953  557899 kubeadm.go:319] 
	I1217 07:50:47.548017  557899 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 07:50:47.548084  557899 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 07:50:47.548089  557899 kubeadm.go:319] 
	I1217 07:50:47.548174  557899 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kmd3fl.fb4wvkd0q8yiee8n \
	I1217 07:50:47.548285  557899 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 07:50:47.548306  557899 kubeadm.go:319] 	--control-plane 
	I1217 07:50:47.548310  557899 kubeadm.go:319] 
	I1217 07:50:47.548384  557899 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 07:50:47.548396  557899 kubeadm.go:319] 
	I1217 07:50:47.548479  557899 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kmd3fl.fb4wvkd0q8yiee8n \
	I1217 07:50:47.548613  557899 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
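
The `--discovery-token-ca-cert-hash sha256:…` value printed in the join commands above is kubeadm's public-key pin: a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA. As a hedged illustration (not part of the test flow), the same kind of value can be recomputed from the CA certificate on disk; the ca.crt path below is assumed from the certs directory shown earlier in the log.

```go
// Sketch: recompute a kubeadm-style discovery CA cert hash, i.e. the
// sha256 of the CA certificate's SubjectPublicKeyInfo. Path is illustrative.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the raw SubjectPublicKeyInfo bytes of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```
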
	I1217 07:50:47.548632  557899 cni.go:84] Creating CNI manager for ""
	I1217 07:50:47.548640  557899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:50:47.550177  557899 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 07:50:47.551669  557899 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 07:50:47.556354  557899 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 07:50:47.556378  557899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 07:50:47.570185  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 07:50:47.790979  557899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 07:50:47.791148  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:47.791260  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-910958 minikube.k8s.io/updated_at=2025_12_17T07_50_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=addons-910958 minikube.k8s.io/primary=true
	I1217 07:50:47.805743  557899 ops.go:34] apiserver oom_adj: -16
	I1217 07:50:47.874568  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:48.374606  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:48.874938  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:49.375396  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:49.875177  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:50.374996  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:50.874691  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:51.375266  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:51.875071  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:52.374701  557899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 07:50:52.453413  557899 kubeadm.go:1114] duration metric: took 4.662308084s to wait for elevateKubeSystemPrivileges
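
The burst of identical `kubectl get sa default` runs above is a poll: minikube re-checks roughly every 500ms until the `default` service account exists before it grants kube-system privileges, which is the ~4.66s wait reported on the line just above. The sketch below shows the same poll-until-success pattern by shelling out to kubectl; it assumes `kubectl` is on PATH and reuses the kubeconfig path from the log, and it is illustrative rather than minikube's actual code.

```go
// Sketch: poll `kubectl get sa default` until it succeeds or a deadline
// passes, mirroring the repeated checks in the log. Assumes kubectl on PATH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		cmd := exec.Command("kubectl",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for the default service account")
			return
		}
		time.Sleep(500 * time.Millisecond) // interval observed in the log
	}
}
```
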
	I1217 07:50:52.453457  557899 kubeadm.go:403] duration metric: took 14.59914402s to StartCluster
	I1217 07:50:52.453480  557899 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:52.453643  557899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:50:52.454176  557899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:50:52.454401  557899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 07:50:52.454457  557899 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 07:50:52.454566  557899 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 07:50:52.454727  557899 addons.go:70] Setting gcp-auth=true in profile "addons-910958"
	I1217 07:50:52.454743  557899 addons.go:70] Setting cloud-spanner=true in profile "addons-910958"
	I1217 07:50:52.454761  557899 addons.go:239] Setting addon cloud-spanner=true in "addons-910958"
	I1217 07:50:52.454767  557899 mustload.go:66] Loading cluster: addons-910958
	I1217 07:50:52.454778  557899 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-910958"
	I1217 07:50:52.454790  557899 addons.go:70] Setting registry=true in profile "addons-910958"
	I1217 07:50:52.454807  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454817  557899 addons.go:70] Setting volumesnapshots=true in profile "addons-910958"
	I1217 07:50:52.454808  557899 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-910958"
	I1217 07:50:52.454835  557899 addons.go:239] Setting addon registry=true in "addons-910958"
	I1217 07:50:52.454845  557899 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-910958"
	I1217 07:50:52.454847  557899 addons.go:70] Setting inspektor-gadget=true in profile "addons-910958"
	I1217 07:50:52.454858  557899 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-910958"
	I1217 07:50:52.454886  557899 addons.go:239] Setting addon inspektor-gadget=true in "addons-910958"
	I1217 07:50:52.454903  557899 addons.go:70] Setting ingress=true in profile "addons-910958"
	I1217 07:50:52.454911  557899 addons.go:70] Setting ingress-dns=true in profile "addons-910958"
	I1217 07:50:52.454915  557899 addons.go:239] Setting addon ingress=true in "addons-910958"
	I1217 07:50:52.454925  557899 addons.go:239] Setting addon ingress-dns=true in "addons-910958"
	I1217 07:50:52.454930  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454941  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454945  557899 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-910958"
	I1217 07:50:52.454949  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454729  557899 addons.go:70] Setting yakd=true in profile "addons-910958"
	I1217 07:50:52.454967  557899 addons.go:239] Setting addon yakd=true in "addons-910958"
	I1217 07:50:52.454984  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454989  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454800  557899 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-910958"
	I1217 07:50:52.455089  557899 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-910958"
	I1217 07:50:52.455184  557899 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-910958"
	I1217 07:50:52.455213  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.455349  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455401  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455455  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455463  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455566  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455588  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.455825  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.454886  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454942  557899 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:50:52.454809  557899 addons.go:70] Setting volcano=true in profile "addons-910958"
	I1217 07:50:52.456585  557899 addons.go:239] Setting addon volcano=true in "addons-910958"
	I1217 07:50:52.456630  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454896  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454755  557899 addons.go:70] Setting registry-creds=true in profile "addons-910958"
	I1217 07:50:52.457131  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.457147  557899 addons.go:239] Setting addon registry-creds=true in "addons-910958"
	I1217 07:50:52.457187  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.455899  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.454771  557899 addons.go:70] Setting storage-provisioner=true in profile "addons-910958"
	I1217 07:50:52.457671  557899 addons.go:239] Setting addon storage-provisioner=true in "addons-910958"
	I1217 07:50:52.454836  557899 addons.go:239] Setting addon volumesnapshots=true in "addons-910958"
	I1217 07:50:52.454732  557899 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:50:52.457748  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.457707  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454895  557899 addons.go:70] Setting metrics-server=true in profile "addons-910958"
	I1217 07:50:52.457928  557899 addons.go:239] Setting addon metrics-server=true in "addons-910958"
	I1217 07:50:52.457967  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.454949  557899 addons.go:70] Setting default-storageclass=true in profile "addons-910958"
	I1217 07:50:52.458246  557899 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-910958"
	I1217 07:50:52.459263  557899 out.go:179] * Verifying Kubernetes components...
	I1217 07:50:52.464543  557899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 07:50:52.468199  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.468424  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469076  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469092  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469723  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.469786  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.470080  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.470934  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.521974  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 07:50:52.523512  557899 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 07:50:52.526313  557899 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 07:50:52.526426  557899 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 07:50:52.526449  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 07:50:52.526634  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.528636  557899 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 07:50:52.528664  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 07:50:52.528747  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.529803  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 07:50:52.533573  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 07:50:52.533782  557899 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 07:50:52.535991  557899 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 07:50:52.537652  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 07:50:52.537854  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.536216  557899 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 07:50:52.538116  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 07:50:52.538179  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.540019  557899 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 07:50:52.542693  557899 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 07:50:52.542846  557899 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 07:50:52.542858  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 07:50:52.542936  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.546924  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 07:50:52.546954  557899 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 07:50:52.547043  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.552188  557899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 07:50:52.554519  557899 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 07:50:52.554561  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 07:50:52.554667  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.577235  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.581624  557899 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 07:50:52.581802  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 07:50:52.583661  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 07:50:52.583688  557899 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 07:50:52.583802  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 07:50:52.583875  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.587491  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 07:50:52.588088  557899 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-910958"
	I1217 07:50:52.588164  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.588726  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:50:52.592082  557899 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 07:50:52.593931  557899 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 07:50:52.593996  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 07:50:52.594079  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.597122  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 07:50:52.598209  557899 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 07:50:52.600092  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 07:50:52.601865  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 07:50:52.602211  557899 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 07:50:52.603319  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 07:50:52.604747  557899 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 07:50:52.604810  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 07:50:52.604902  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.605276  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 07:50:52.605506  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 07:50:52.605523  557899 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 07:50:52.605734  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.610109  557899 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 07:50:52.611791  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 07:50:52.612075  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 07:50:52.612303  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.611957  557899 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 07:50:52.614525  557899 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 07:50:52.614590  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 07:50:52.614488  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.614664  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.638039  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.639233  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.642461  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.650471  557899 addons.go:239] Setting addon default-storageclass=true in "addons-910958"
	I1217 07:50:52.650529  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:50:52.651054  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	W1217 07:50:52.651601  557899 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 07:50:52.654179  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.656820  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.664715  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.680330  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.681226  557899 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 07:50:52.687168  557899 out.go:179]   - Using image docker.io/busybox:stable
	I1217 07:50:52.687776  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.688921  557899 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 07:50:52.688952  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 07:50:52.689025  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.689185  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.689404  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	W1217 07:50:52.694578  557899 sshutil.go:67] dial failure (will retry): ssh: handshake failed: EOF
	I1217 07:50:52.695001  557899 retry.go:31] will retry after 342.723363ms: ssh: handshake failed: EOF
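
The `ssh: handshake failed: EOF` warning above is treated as transient: instead of aborting addon setup, minikube schedules another attempt after a short delay. A generic retry-after-delay helper in the same spirit is sketched below; it is illustrative only and not minikube's retry package, and the simulated error is hypothetical.

```go
// Sketch: retry a flaky operation a few times with a fixed delay, the way
// the log retries a failed SSH handshake. Generic, illustrative helper.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // simulated transient error
		}
		return nil
	})
	fmt.Println("final result:", err)
}
```
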
	I1217 07:50:52.698743  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.704141  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.709749  557899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 07:50:52.713362  557899 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 07:50:52.713389  557899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 07:50:52.713459  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:50:52.746589  557899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 07:50:52.758624  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.758959  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:50:52.826018  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 07:50:52.849671  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 07:50:52.870154  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 07:50:52.870184  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 07:50:52.886576  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 07:50:52.886618  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 07:50:52.900480  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 07:50:52.901389  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 07:50:52.902171  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 07:50:52.902301  557899 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 07:50:52.902310  557899 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 07:50:52.902380  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 07:50:52.903066  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 07:50:52.903632  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 07:50:52.903652  557899 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 07:50:52.926256  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 07:50:52.926315  557899 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 07:50:52.935425  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 07:50:52.939331  557899 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 07:50:52.939391  557899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 07:50:52.942256  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 07:50:52.956881  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 07:50:52.956915  557899 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 07:50:52.958893  557899 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 07:50:52.958933  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 07:50:52.964117  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 07:50:52.964167  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 07:50:52.978728  557899 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 07:50:52.978765  557899 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 07:50:53.004156  557899 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 07:50:53.004197  557899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 07:50:53.011982  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 07:50:53.012010  557899 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 07:50:53.036276  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 07:50:53.038726  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 07:50:53.040138  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 07:50:53.040164  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 07:50:53.053820  557899 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 07:50:53.053926  557899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 07:50:53.066932  557899 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 07:50:53.066960  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 07:50:53.106070  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 07:50:53.106186  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 07:50:53.110683  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 07:50:53.121222  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 07:50:53.121318  557899 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 07:50:53.132076  557899 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 07:50:53.134458  557899 node_ready.go:35] waiting up to 6m0s for node "addons-910958" to be "Ready" ...
	I1217 07:50:53.161211  557899 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 07:50:53.161242  557899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 07:50:53.195676  557899 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 07:50:53.195709  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 07:50:53.254244  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 07:50:53.254275  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 07:50:53.287680  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 07:50:53.317298  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 07:50:53.324347  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 07:50:53.324379  557899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 07:50:53.403377  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 07:50:53.403407  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 07:50:53.443880  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 07:50:53.443909  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 07:50:53.492557  557899 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 07:50:53.492591  557899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 07:50:53.558990  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 07:50:53.658814  557899 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-910958" context rescaled to 1 replicas
	I1217 07:50:54.275245  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.373036637s)
	I1217 07:50:54.275289  557899 addons.go:495] Verifying addon ingress=true in "addons-910958"
	I1217 07:50:54.275344  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.372248439s)
	I1217 07:50:54.275463  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.333173631s)
	I1217 07:50:54.275391  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.339919427s)
	I1217 07:50:54.275672  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.239357988s)
	I1217 07:50:54.275691  557899 addons.go:495] Verifying addon metrics-server=true in "addons-910958"
	I1217 07:50:54.275718  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.236971878s)
	I1217 07:50:54.275729  557899 addons.go:495] Verifying addon registry=true in "addons-910958"
	I1217 07:50:54.275829  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.165001056s)
	I1217 07:50:54.278348  557899 out.go:179] * Verifying ingress addon...
	I1217 07:50:54.278369  557899 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-910958 service yakd-dashboard -n yakd-dashboard
	
	I1217 07:50:54.278956  557899 out.go:179] * Verifying registry addon...
	I1217 07:50:54.281777  557899 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 07:50:54.282000  557899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 07:50:54.287716  557899 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 07:50:54.289116  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:54.287780  557899 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 07:50:54.289149  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 07:50:54.288968  557899 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1217 07:50:54.748801  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.461065045s)
	W1217 07:50:54.748867  557899 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 07:50:54.748903  557899 retry.go:31] will retry after 316.23395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
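The "no matches for kind VolumeSnapshotClass" failure above is a CRD-establishment race: the csi-hostpath-snapclass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet finished registering the new kind, so minikube schedules a retry (316ms here). Done by hand, the ordering can be made explicit by waiting for the CRD to report Established before applying objects of that kind; a minimal sketch, assuming kubectl access on the node:

	# block until the VolumeSnapshotClass CRD is servable, then apply the class itself
	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml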
	I1217 07:50:54.748950  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.431621525s)
	I1217 07:50:54.749182  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.190081215s)
	I1217 07:50:54.749200  557899 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-910958"
	I1217 07:50:54.754145  557899 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 07:50:54.757924  557899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 07:50:54.760929  557899 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 07:50:54.760949  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:54.862322  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:54.862400  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:55.065330  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1217 07:50:55.137890  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:50:55.262135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:55.285040  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:55.285720  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:55.761938  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:55.825524  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:55.825716  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:56.262213  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:56.285132  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:56.285371  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:56.761978  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:56.785259  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:56.862624  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 07:50:57.138337  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:50:57.262166  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:57.284889  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:57.284938  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:57.565725  557899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.500342168s)
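The forced re-apply completes about 2.5s later with no error logged, which indicates the snapshot CRDs had become established in the meantime. Whether the kind is now servable can be confirmed directly; a small check, assuming kubectl access:

	# both should now list the snapshot.storage.k8s.io kinds and the installed class
	kubectl api-resources --api-group=snapshot.storage.k8s.io
	kubectl get volumesnapshotclasses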
	I1217 07:50:57.762150  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:57.862787  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:57.862958  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:58.263625  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:58.285196  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:58.285403  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:58.762078  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:58.785385  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:50:58.863135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:59.262184  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:59.284836  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:59.285077  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 07:50:59.638229  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:50:59.761748  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:50:59.862040  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:50:59.862124  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:00.185852  557899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 07:51:00.185924  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:51:00.203928  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:51:00.260791  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:00.285781  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:00.285969  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:00.319009  557899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 07:51:00.332922  557899 addons.go:239] Setting addon gcp-auth=true in "addons-910958"
	I1217 07:51:00.332994  557899 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:51:00.333369  557899 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:51:00.352669  557899 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 07:51:00.352738  557899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:51:00.370639  557899 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:51:00.463519  557899 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 07:51:00.464687  557899 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 07:51:00.465909  557899 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 07:51:00.465932  557899 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 07:51:00.480154  557899 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 07:51:00.480190  557899 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 07:51:00.494617  557899 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 07:51:00.494653  557899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 07:51:00.508807  557899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 07:51:00.762614  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:00.803815  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:00.803907  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:00.831570  557899 addons.go:495] Verifying addon gcp-auth=true in "addons-910958"
	I1217 07:51:00.833239  557899 out.go:179] * Verifying gcp-auth addon...
	I1217 07:51:00.835207  557899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 07:51:00.863804  557899 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 07:51:00.863827  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:01.262062  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:01.285059  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:01.285315  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:01.339165  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:01.761240  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:01.785361  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:01.785638  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:01.838055  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 07:51:02.138179  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:51:02.261995  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:02.284956  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:02.285099  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:02.338929  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:02.761237  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:02.785162  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:02.785306  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:02.838986  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:03.260897  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:03.284723  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:03.284778  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:03.338411  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:03.761251  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:03.785096  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:03.785152  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:03.838936  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:04.261175  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:04.284897  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:04.285128  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:04.339281  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 07:51:04.637607  557899 node_ready.go:57] node "addons-910958" has "Ready":"False" status (will retry)
	I1217 07:51:04.761893  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:04.785972  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:04.786124  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:04.838720  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:05.262185  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:05.285180  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:05.285275  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:05.338782  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:05.761913  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:05.784830  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:05.785048  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:05.838959  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:06.137525  557899 node_ready.go:49] node "addons-910958" is "Ready"
	I1217 07:51:06.137570  557899 node_ready.go:38] duration metric: took 13.003085494s for node "addons-910958" to be "Ready" ...
	I1217 07:51:06.137589  557899 api_server.go:52] waiting for apiserver process to appear ...
	I1217 07:51:06.137647  557899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 07:51:06.151515  557899 api_server.go:72] duration metric: took 13.697011933s to wait for apiserver process to appear ...
	I1217 07:51:06.151571  557899 api_server.go:88] waiting for apiserver healthz status ...
	I1217 07:51:06.151601  557899 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 07:51:06.160399  557899 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 07:51:06.161428  557899 api_server.go:141] control plane version: v1.34.3
	I1217 07:51:06.161472  557899 api_server.go:131] duration metric: took 9.891339ms to wait for apiserver health ...
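The health check above hits the apiserver's /healthz endpoint on https://192.168.49.2:8443, gets a 200/ok, and then reads the control plane version. Equivalent probes can be issued through kubectl; a short sketch, assuming the profile's kubeconfig is active (recent Kubernetes versions also expose the more granular /readyz endpoint):

	kubectl get --raw /healthz
	kubectl get --raw '/readyz?verbose'
	kubectl version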
	I1217 07:51:06.161484  557899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 07:51:06.165286  557899 system_pods.go:59] 20 kube-system pods found
	I1217 07:51:06.165320  557899 system_pods.go:61] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.165327  557899 system_pods.go:61] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.165334  557899 system_pods.go:61] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.165340  557899 system_pods.go:61] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.165347  557899 system_pods.go:61] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.165355  557899 system_pods.go:61] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.165360  557899 system_pods.go:61] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.165367  557899 system_pods.go:61] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.165371  557899 system_pods.go:61] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.165378  557899 system_pods.go:61] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.165382  557899 system_pods.go:61] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.165391  557899 system_pods.go:61] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.165398  557899 system_pods.go:61] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.165404  557899 system_pods.go:61] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.165411  557899 system_pods.go:61] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.165416  557899 system_pods.go:61] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.165424  557899 system_pods.go:61] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.165429  557899 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.165437  557899 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.165442  557899 system_pods.go:61] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.165450  557899 system_pods.go:74] duration metric: took 3.959573ms to wait for pod list to return data ...
	I1217 07:51:06.165460  557899 default_sa.go:34] waiting for default service account to be created ...
	I1217 07:51:06.167752  557899 default_sa.go:45] found service account: "default"
	I1217 07:51:06.167770  557899 default_sa.go:55] duration metric: took 2.305215ms for default service account to be created ...
	I1217 07:51:06.167778  557899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 07:51:06.170735  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:06.170763  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.170772  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.170779  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.170788  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.170794  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.170800  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.170805  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.170811  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.170815  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.170823  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.170826  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.170830  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.170834  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.170842  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.170849  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.170854  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.170863  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.170870  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.170876  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.170884  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.170920  557899 retry.go:31] will retry after 279.559466ms: missing components: kube-dns
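"missing components: kube-dns" means coredns is still Pending, so the k8s-apps check backs off (279ms here) and polls the kube-system pod list again until every required component reports Running. An equivalent hand-rolled wait, assuming kubectl access and that the CoreDNS pods carry the usual k8s-app=kube-dns label:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s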
	I1217 07:51:06.266476  557899 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 07:51:06.266502  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:06.369733  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:06.369746  557899 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 07:51:06.369780  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:06.369748  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:06.471996  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:06.472046  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.472057  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.472067  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.472078  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.472088  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.472094  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.472100  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.472106  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.472113  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.472121  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.472126  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.472131  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.472139  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.472148  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.472156  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.472163  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.472171  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.472189  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.472200  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.472207  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.472229  557899 retry.go:31] will retry after 339.447836ms: missing components: kube-dns
	I1217 07:51:06.762160  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:06.784981  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:06.785199  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:06.815938  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:06.815976  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:06.815984  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 07:51:06.816006  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:06.816016  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:06.816026  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:06.816033  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:06.816041  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:06.816051  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:06.816054  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:06.816061  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:06.816065  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:06.816071  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:06.816079  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:06.816084  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:06.816094  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:06.816100  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:06.816109  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:06.816120  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.816131  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:06.816153  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 07:51:06.816176  557899 retry.go:31] will retry after 424.196304ms: missing components: kube-dns
	I1217 07:51:06.838101  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:07.245135  557899 system_pods.go:86] 20 kube-system pods found
	I1217 07:51:07.245178  557899 system_pods.go:89] "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 07:51:07.245186  557899 system_pods.go:89] "coredns-66bc5c9577-h9rb2" [d43c68bd-5403-4151-b511-f73845f506c7] Running
	I1217 07:51:07.245197  557899 system_pods.go:89] "csi-hostpath-attacher-0" [bb679d3d-c4e7-4e0f-9dbb-6d2be3a54c55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 07:51:07.245214  557899 system_pods.go:89] "csi-hostpath-resizer-0" [022f5fa0-146c-439f-ad02-c0f8caed089a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 07:51:07.245223  557899 system_pods.go:89] "csi-hostpathplugin-lmbsr" [6c21ba52-86ac-4b34-852e-79742b1c46e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 07:51:07.245229  557899 system_pods.go:89] "etcd-addons-910958" [9701ae90-486e-44c8-817d-5f05fa8ec294] Running
	I1217 07:51:07.245236  557899 system_pods.go:89] "kindnet-l7fvh" [d17dfd00-cbc1-4274-a202-061aeb1d4fd3] Running
	I1217 07:51:07.245243  557899 system_pods.go:89] "kube-apiserver-addons-910958" [c680b413-6c9c-4939-b58b-96547fad09b8] Running
	I1217 07:51:07.245250  557899 system_pods.go:89] "kube-controller-manager-addons-910958" [1a1e172f-08b9-4d67-923e-a056b8400193] Running
	I1217 07:51:07.245265  557899 system_pods.go:89] "kube-ingress-dns-minikube" [9098ada4-e384-40d9-93d8-45d83abd443b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 07:51:07.245271  557899 system_pods.go:89] "kube-proxy-rpkss" [364e3fac-b7cd-406d-9c77-a89197f547b4] Running
	I1217 07:51:07.245278  557899 system_pods.go:89] "kube-scheduler-addons-910958" [503509dd-54fd-4560-9969-a1e225f6c01b] Running
	I1217 07:51:07.245289  557899 system_pods.go:89] "metrics-server-85b7d694d7-9j26h" [a7d02e5b-d0f3-4305-b427-a0ccaf5bca19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 07:51:07.245299  557899 system_pods.go:89] "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 07:51:07.245322  557899 system_pods.go:89] "registry-6b586f9694-hn4rs" [f70288f1-0d07-428d-ba5d-4c40e7878aa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 07:51:07.245331  557899 system_pods.go:89] "registry-creds-764b6fb674-brbhv" [235047b9-19f8-440e-9443-a43977c33808] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 07:51:07.245349  557899 system_pods.go:89] "registry-proxy-x5kj2" [2ff5d815-ab86-44d4-a194-46657caa0621] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 07:51:07.245364  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pw726" [bcd57629-25e1-4e08-a9ed-2dc43e8cf336] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:07.245377  557899 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkxh6" [bc741d21-8bf6-4117-abd7-988587a875a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 07:51:07.245384  557899 system_pods.go:89] "storage-provisioner" [4e21baf1-e2c1-4324-a832-597d88b47b24] Running
	I1217 07:51:07.245399  557899 system_pods.go:126] duration metric: took 1.077612298s to wait for k8s-apps to be running ...
	I1217 07:51:07.245413  557899 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 07:51:07.245469  557899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 07:51:07.259965  557899 system_svc.go:56] duration metric: took 14.539685ms WaitForService to wait for kubelet
	I1217 07:51:07.259997  557899 kubeadm.go:587] duration metric: took 14.805501605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 07:51:07.260020  557899 node_conditions.go:102] verifying NodePressure condition ...
	I1217 07:51:07.261319  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:07.262615  557899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 07:51:07.262641  557899 node_conditions.go:123] node cpu capacity is 8
	I1217 07:51:07.262674  557899 node_conditions.go:105] duration metric: took 2.647793ms to run NodePressure ...
	I1217 07:51:07.262697  557899 start.go:242] waiting for startup goroutines ...
	I1217 07:51:07.285508  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:07.285502  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:07.344342  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:07.761770  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:07.785908  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:07.785942  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:07.839983  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:08.261768  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:08.285348  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:08.285377  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:08.337986  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:08.761500  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:08.785207  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:08.785332  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:08.837978  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:09.262075  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:09.284808  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:09.284935  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:09.338720  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:09.762380  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:09.789428  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:09.789497  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:09.838866  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:10.263527  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:10.286332  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:10.286352  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:10.339181  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:10.763166  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:10.785250  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:10.785337  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:10.839683  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:11.265128  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:11.285242  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:11.285263  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:11.339341  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:11.761945  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:11.786065  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:11.786274  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:11.838611  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:12.262148  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:12.285202  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:12.285473  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:12.338143  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:12.762431  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:12.785431  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:12.785497  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:12.838496  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:13.262574  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:13.285717  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:13.285808  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:13.339169  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:13.761858  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:13.785987  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:13.785992  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:13.838954  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:14.261517  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:14.286217  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:14.286392  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:14.339104  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:14.762921  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:14.786339  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:14.786837  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:14.838693  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:15.262458  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:15.285867  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:15.286032  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:15.338960  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:15.763071  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:15.785874  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:15.786910  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:15.838684  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:16.261491  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:16.285380  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:16.285396  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:16.338312  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:16.762401  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:16.785636  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:16.785681  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:16.838420  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:17.262205  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:17.284862  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:17.284901  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:17.338820  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:17.762088  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:17.786218  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:17.786267  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:17.839076  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:18.262239  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:18.285416  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:18.285504  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:18.339206  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:18.762135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:18.785167  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:18.785974  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:18.838813  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:19.261942  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:19.286240  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:19.286487  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:19.339025  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:19.761187  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:19.785126  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:19.785290  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:19.838986  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:20.261507  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:20.285289  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:20.285452  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:20.339111  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:20.761896  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:20.785863  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:20.785890  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:20.838211  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:21.262433  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:21.285599  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:21.285702  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:21.338649  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:21.762157  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:21.862729  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:21.862851  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:21.863039  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:22.263099  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:22.285029  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:22.285030  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:22.339262  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:22.762274  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:22.785969  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:22.786090  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:22.838822  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:23.261692  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:23.285422  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:23.285549  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:23.338528  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:23.762317  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:23.785367  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:23.785418  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:23.838771  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:24.261273  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:24.284945  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:24.285153  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:24.338701  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:24.761989  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:24.786228  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:24.786421  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:24.839307  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:25.262097  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:25.284764  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:25.284784  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:25.338354  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:25.762991  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:25.785903  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:25.785976  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:25.839041  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:26.262164  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:26.285001  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:26.285072  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:26.339206  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:26.762131  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:26.862264  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:26.862366  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:26.862418  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:27.262369  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:27.363103  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:27.363131  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:27.363296  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:27.761795  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:27.786346  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:27.786368  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:27.839318  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:28.262048  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:28.285997  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:28.286349  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:28.338178  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:28.761864  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:28.785673  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:28.785735  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:28.838315  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:29.342662  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:29.342894  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:29.343075  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:29.343217  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:29.762021  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:29.784919  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:29.785065  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:29.838759  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:30.261859  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:30.285141  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:30.285353  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:30.339107  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:30.762735  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:30.786292  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:30.787359  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:30.840613  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:31.262155  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:31.286078  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:31.286192  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:31.339254  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:31.762347  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:31.784998  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:31.785132  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:31.839185  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:32.261953  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:32.285433  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:32.285510  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:32.338234  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:32.762947  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:32.785775  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:32.785821  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:32.838728  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:33.261417  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:33.285208  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:33.285395  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:33.339270  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:33.761944  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:33.784939  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:33.785037  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:33.838672  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:34.262667  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:34.285923  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:34.286034  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:34.338863  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:34.761792  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:34.862433  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:34.862587  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:34.862617  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:35.262265  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:35.285091  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:35.285141  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:35.338832  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:35.761494  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:35.785355  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:35.785421  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:35.838155  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:36.261991  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:36.284962  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:36.285169  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:36.338853  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:36.762135  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:36.785722  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:36.785857  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:36.838918  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:37.262039  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:37.285200  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:37.285383  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:37.338915  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:37.761473  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:37.785271  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:37.785311  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 07:51:37.861976  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:38.261586  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:38.285492  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:38.285502  557899 kapi.go:107] duration metric: took 44.003500213s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 07:51:38.338345  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:38.762869  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:38.785955  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:38.838952  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:39.262852  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:39.286068  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:39.339315  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:39.762196  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:39.785598  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:39.838770  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:40.262298  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:40.285818  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:40.339025  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:40.764327  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:40.787038  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:40.840028  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:41.263268  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:41.285819  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:41.342841  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:41.762800  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:41.862664  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:41.862734  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:42.262253  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:42.285445  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:42.339172  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:42.762726  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:42.785916  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:42.839042  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:43.262223  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:43.284991  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:43.338980  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:43.762556  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:43.785396  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:43.838193  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:44.261694  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:44.285516  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:44.338191  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:44.762280  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:44.785273  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:44.839454  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:45.262345  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:45.285318  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:45.363289  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:45.762013  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:45.784852  557899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 07:51:45.838969  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:46.261800  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:46.285514  557899 kapi.go:107] duration metric: took 52.003736636s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 07:51:46.338082  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:46.762264  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:46.838822  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:47.262285  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:47.338359  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:47.762090  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:47.839074  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:48.285126  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:48.384013  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:48.762967  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:48.862978  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 07:51:49.261980  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:49.338892  557899 kapi.go:107] duration metric: took 48.503677997s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 07:51:49.341402  557899 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-910958 cluster.
	I1217 07:51:49.343029  557899 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 07:51:49.344663  557899 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 07:51:49.762077  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:50.261904  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:50.762419  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:51.262416  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:51.761694  557899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 07:51:52.262323  557899 kapi.go:107] duration metric: took 57.504400189s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 07:51:52.264767  557899 out.go:179] * Enabled addons: ingress-dns, cloud-spanner, registry-creds, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, nvidia-device-plugin, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1217 07:51:52.265923  557899 addons.go:530] duration metric: took 59.811358989s for enable addons: enabled=[ingress-dns cloud-spanner registry-creds storage-provisioner amd-gpu-device-plugin inspektor-gadget metrics-server yakd default-storageclass nvidia-device-plugin volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1217 07:51:52.265970  557899 start.go:247] waiting for cluster config update ...
	I1217 07:51:52.265990  557899 start.go:256] writing updated cluster config ...
	I1217 07:51:52.266261  557899 ssh_runner.go:195] Run: rm -f paused
	I1217 07:51:52.270482  557899 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 07:51:52.273681  557899 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h9rb2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.277789  557899 pod_ready.go:94] pod "coredns-66bc5c9577-h9rb2" is "Ready"
	I1217 07:51:52.277812  557899 pod_ready.go:86] duration metric: took 4.107297ms for pod "coredns-66bc5c9577-h9rb2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.279867  557899 pod_ready.go:83] waiting for pod "etcd-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.283612  557899 pod_ready.go:94] pod "etcd-addons-910958" is "Ready"
	I1217 07:51:52.283635  557899 pod_ready.go:86] duration metric: took 3.743448ms for pod "etcd-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.285476  557899 pod_ready.go:83] waiting for pod "kube-apiserver-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.289126  557899 pod_ready.go:94] pod "kube-apiserver-addons-910958" is "Ready"
	I1217 07:51:52.289152  557899 pod_ready.go:86] duration metric: took 3.653735ms for pod "kube-apiserver-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.290879  557899 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.674237  557899 pod_ready.go:94] pod "kube-controller-manager-addons-910958" is "Ready"
	I1217 07:51:52.674268  557899 pod_ready.go:86] duration metric: took 383.368833ms for pod "kube-controller-manager-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:52.874759  557899 pod_ready.go:83] waiting for pod "kube-proxy-rpkss" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.275056  557899 pod_ready.go:94] pod "kube-proxy-rpkss" is "Ready"
	I1217 07:51:53.275086  557899 pod_ready.go:86] duration metric: took 400.298982ms for pod "kube-proxy-rpkss" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.474740  557899 pod_ready.go:83] waiting for pod "kube-scheduler-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.875030  557899 pod_ready.go:94] pod "kube-scheduler-addons-910958" is "Ready"
	I1217 07:51:53.875060  557899 pod_ready.go:86] duration metric: took 400.287387ms for pod "kube-scheduler-addons-910958" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 07:51:53.875077  557899 pod_ready.go:40] duration metric: took 1.604557482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 07:51:53.921305  557899 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 07:51:53.924461  557899 out.go:179] * Done! kubectl is now configured to use "addons-910958" cluster and "default" namespace by default
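	
	The repeated kapi.go:96 and pod_ready.go entries above all record the same basic pattern: list pods matching a label selector, poll until every match reports the Ready condition, then log a "duration metric". The following is a minimal sketch of that pattern, assuming client-go; the package name, the waitForPodsReady helper, and the 500ms poll interval are illustrative assumptions, not minikube's actual kapi.go/pod_ready.go implementation.
	
	// Sketch only: polls pods selected by label until all report Ready.
	package podwait
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForPodsReady (hypothetical name) lists pods by label selector and
	// retries until every matching pod has the Ready condition set to True.
	func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // nothing matches yet ("current state: Pending")
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
		}
		return err
	}
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}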
	
	
	==> CRI-O <==
	Dec 17 07:51:51 addons-910958 crio[772]: time="2025-12-17T07:51:51.621721801Z" level=info msg="Starting container: c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556" id=751e400a-9012-48ee-8e5b-8dd528930448 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 07:51:51 addons-910958 crio[772]: time="2025-12-17T07:51:51.624561186Z" level=info msg="Started container" PID=6212 containerID=c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556 description=kube-system/csi-hostpathplugin-lmbsr/csi-snapshotter id=751e400a-9012-48ee-8e5b-8dd528930448 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6a29c5b608a21758768544b8b45ddb9cf429fc749a695518c3ab29d0c576bee
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.746103821Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1b2e8950-d4ee-4009-9092-d4bb9f4ac491 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.746179376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.752510393Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7ad4acd0119d3344d6b77cb18b3090f84edb7fee34f3cef95b4b3868d012db25 UID:65e19e6e-a12c-411f-b533-a578e1a367ef NetNS:/var/run/netns/c0b4596b-72e5-4e5c-a918-58d2cb32bd6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002f48c8}] Aliases:map[]}"
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.752563547Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.763980318Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7ad4acd0119d3344d6b77cb18b3090f84edb7fee34f3cef95b4b3868d012db25 UID:65e19e6e-a12c-411f-b533-a578e1a367ef NetNS:/var/run/netns/c0b4596b-72e5-4e5c-a918-58d2cb32bd6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002f48c8}] Aliases:map[]}"
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.764120135Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.765029213Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.765861461Z" level=info msg="Ran pod sandbox 7ad4acd0119d3344d6b77cb18b3090f84edb7fee34f3cef95b4b3868d012db25 with infra container: default/busybox/POD" id=1b2e8950-d4ee-4009-9092-d4bb9f4ac491 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.767155833Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=178de71a-0bb1-4c03-b145-ffe6189559a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.767277098Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=178de71a-0bb1-4c03-b145-ffe6189559a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.767323331Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=178de71a-0bb1-4c03-b145-ffe6189559a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.768002196Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0565bc4e-46b9-4aa4-8e77-6116be8e0c18 name=/runtime.v1.ImageService/PullImage
	Dec 17 07:51:54 addons-910958 crio[772]: time="2025-12-17T07:51:54.769893421Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.581090595Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0565bc4e-46b9-4aa4-8e77-6116be8e0c18 name=/runtime.v1.ImageService/PullImage
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.581862272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=81c46b1e-090d-4189-817b-59f9b55d0319 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.58335783Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0d3b6d01-f46a-4906-9548-bac2a1800fc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.587238309Z" level=info msg="Creating container: default/busybox/busybox" id=a9ca97fe-073c-4678-ad93-8b67c3960778 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.587366972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.592762378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.59318126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.620832425Z" level=info msg="Created container f9e1daa3443ddcdf612433152759dc0608dc1e579cd43122c6e41c42cfa8ddd0: default/busybox/busybox" id=a9ca97fe-073c-4678-ad93-8b67c3960778 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.621545135Z" level=info msg="Starting container: f9e1daa3443ddcdf612433152759dc0608dc1e579cd43122c6e41c42cfa8ddd0" id=3ea7a732-acf8-4a0c-8615-22656e773ba3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 07:51:56 addons-910958 crio[772]: time="2025-12-17T07:51:56.623487802Z" level=info msg="Started container" PID=6330 containerID=f9e1daa3443ddcdf612433152759dc0608dc1e579cd43122c6e41c42cfa8ddd0 description=default/busybox/busybox id=3ea7a732-acf8-4a0c-8615-22656e773ba3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ad4acd0119d3344d6b77cb18b3090f84edb7fee34f3cef95b4b3868d012db25
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	f9e1daa3443dd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   7ad4acd0119d3       busybox                                     default
	c35c7accdf21b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          12 seconds ago       Running             csi-snapshotter                          0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	2c42903f322a2       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          13 seconds ago       Running             csi-provisioner                          0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	d3dd2ef7e0cca       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            14 seconds ago       Running             liveness-probe                           0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	86a876957c74c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 seconds ago       Running             hostpath                                 0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	d56cf4295e0dc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   837e6ac3d76f9       gcp-auth-78565c9fb4-r29vk                   gcp-auth
	eb06595f89410       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             19 seconds ago       Running             controller                               0                   b30b2753d50c4       ingress-nginx-controller-85d4c799dd-tnjvc   ingress-nginx
	178bdf537daef       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             22 seconds ago       Exited              patch                                    2                   d25106b199db7       gcp-auth-certs-patch-c5vh9                  gcp-auth
	321626501fabd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                22 seconds ago       Running             node-driver-registrar                    0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	5cdb117a00344       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            23 seconds ago       Running             gadget                                   0                   23e2b63486a5f       gadget-g8wb6                                gadget
	7cf58434aeec3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              26 seconds ago       Running             registry-proxy                           0                   4e200115ba4e6       registry-proxy-x5kj2                        kube-system
	d6ff70f629b0e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      29 seconds ago       Running             volume-snapshot-controller               0                   3cdf6ae83031d       snapshot-controller-7d9fbc56b8-vkxh6        kube-system
	ff2d1c6802978       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   29 seconds ago       Running             csi-external-health-monitor-controller   0                   a6a29c5b608a2       csi-hostpathplugin-lmbsr                    kube-system
	4f418a0d25246       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     30 seconds ago       Running             amd-gpu-device-plugin                    0                   5d07f39830c88       amd-gpu-device-plugin-sq4qp                 kube-system
	79702ad3f3aed       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     32 seconds ago       Running             nvidia-device-plugin-ctr                 0                   491225c035b0c       nvidia-device-plugin-daemonset-vwl8f        kube-system
	ad4f1d71a3d82       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      36 seconds ago       Running             volume-snapshot-controller               0                   b162e20b7a9c8       snapshot-controller-7d9fbc56b8-pw726        kube-system
	7700b77e1b687       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             36 seconds ago       Exited              patch                                    1                   13705ce510ad2       ingress-nginx-admission-patch-5kqv5         ingress-nginx
	56da695cee4dd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   37 seconds ago       Exited              create                                   0                   4ee6f2e7b44ba       gcp-auth-certs-create-5z9p6                 gcp-auth
	4980d741df68d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   37 seconds ago       Exited              create                                   0                   3718fbeb80de4       ingress-nginx-admission-create-2r822        ingress-nginx
	b48f70dc9cbcf       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              38 seconds ago       Running             csi-resizer                              0                   aa87d937a26bb       csi-hostpath-resizer-0                      kube-system
	c2a123dbc0656       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             39 seconds ago       Running             local-path-provisioner                   0                   8c1c18288197d       local-path-provisioner-648f6765c9-78mbr     local-path-storage
	98064f5bc77f6       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              40 seconds ago       Running             yakd                                     0                   75851cf78637d       yakd-dashboard-6654c87f9b-th7hc             yakd-dashboard
	77f92e3cf18c9       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               44 seconds ago       Running             cloud-spanner-emulator                   0                   d4b599a468eb7       cloud-spanner-emulator-5bdddb765-p6tr4      default
	1520830ff9484       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             48 seconds ago       Running             csi-attacher                             0                   05564b9fd9843       csi-hostpath-attacher-0                     kube-system
	b05e54700098d       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        49 seconds ago       Running             metrics-server                           0                   2ac911b7b69d2       metrics-server-85b7d694d7-9j26h             kube-system
	152830796e570       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               50 seconds ago       Running             minikube-ingress-dns                     0                   847726c96b580       kube-ingress-dns-minikube                   kube-system
	4f95ea3dd74c9       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           55 seconds ago       Running             registry                                 0                   a2bf45f97dec3       registry-6b586f9694-hn4rs                   kube-system
	ad02540b0f2f5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             58 seconds ago       Running             coredns                                  0                   63a2754046592       coredns-66bc5c9577-h9rb2                    kube-system
	b6e3631773200       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             58 seconds ago       Running             storage-provisioner                      0                   7b676b1fad4e1       storage-provisioner                         kube-system
	f0f35e9b0c091       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           About a minute ago   Running             kindnet-cni                              0                   af575f948a2d2       kindnet-l7fvh                               kube-system
	08a3cfad5dcdf       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             About a minute ago   Running             kube-proxy                               0                   4ae3efecb62df       kube-proxy-rpkss                            kube-system
	f001874e31dcf       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   6812fc3e6aa5b       etcd-addons-910958                          kube-system
	34a5e7ca13b08       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             About a minute ago   Running             kube-apiserver                           0                   a5b746f1cedd3       kube-apiserver-addons-910958                kube-system
	d4daacfac93d4       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             About a minute ago   Running             kube-controller-manager                  0                   f6c7311e4b628       kube-controller-manager-addons-910958       kube-system
	e3b8c740226a7       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             About a minute ago   Running             kube-scheduler                           0                   25a480f7b7ef9       kube-scheduler-addons-910958                kube-system
	
	
	==> coredns [ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77] <==
	[INFO] 10.244.0.17:38811 - 37720 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128955s
	[INFO] 10.244.0.17:42526 - 23496 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090183s
	[INFO] 10.244.0.17:42526 - 23817 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126619s
	[INFO] 10.244.0.17:36732 - 47120 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000069949s
	[INFO] 10.244.0.17:36732 - 46693 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000043153s
	[INFO] 10.244.0.17:59329 - 6376 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000083268s
	[INFO] 10.244.0.17:59329 - 6070 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000083624s
	[INFO] 10.244.0.17:34814 - 37631 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000063678s
	[INFO] 10.244.0.17:34814 - 37882 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000116702s
	[INFO] 10.244.0.17:40606 - 31834 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098503s
	[INFO] 10.244.0.17:40606 - 31690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00011443s
	[INFO] 10.244.0.22:58664 - 52038 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188069s
	[INFO] 10.244.0.22:33491 - 3949 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000254379s
	[INFO] 10.244.0.22:42397 - 62122 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016974s
	[INFO] 10.244.0.22:45578 - 3419 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000236668s
	[INFO] 10.244.0.22:40316 - 54777 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140242s
	[INFO] 10.244.0.22:53481 - 27853 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205748s
	[INFO] 10.244.0.22:35745 - 38319 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.007037629s
	[INFO] 10.244.0.22:55858 - 18939 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00821733s
	[INFO] 10.244.0.22:51405 - 28679 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005487597s
	[INFO] 10.244.0.22:41953 - 57042 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005936126s
	[INFO] 10.244.0.22:36185 - 46787 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003776326s
	[INFO] 10.244.0.22:46666 - 59586 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004715707s
	[INFO] 10.244.0.22:47386 - 11921 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000985998s
	[INFO] 10.244.0.22:33271 - 37073 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00218119s
	
	
	==> describe nodes <==
	Name:               addons-910958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-910958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=addons-910958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T07_50_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-910958
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-910958"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 07:50:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-910958
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 07:51:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 07:51:58 +0000   Wed, 17 Dec 2025 07:50:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 07:51:58 +0000   Wed, 17 Dec 2025 07:50:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 07:51:58 +0000   Wed, 17 Dec 2025 07:50:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 07:51:58 +0000   Wed, 17 Dec 2025 07:51:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-910958
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                f52214bb-3910-469d-8d45-568e2170d4b7
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-p6tr4       0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  gadget                      gadget-g8wb6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  gcp-auth                    gcp-auth-78565c9fb4-r29vk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-tnjvc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         70s
	  kube-system                 amd-gpu-device-plugin-sq4qp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 coredns-66bc5c9577-h9rb2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     72s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 csi-hostpathplugin-lmbsr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 etcd-addons-910958                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         78s
	  kube-system                 kindnet-l7fvh                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      72s
	  kube-system                 kube-apiserver-addons-910958                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-addons-910958        200m (2%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-rpkss                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-addons-910958                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 metrics-server-85b7d694d7-9j26h              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         71s
	  kube-system                 nvidia-device-plugin-daemonset-vwl8f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 registry-6b586f9694-hn4rs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 registry-creds-764b6fb674-brbhv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 registry-proxy-x5kj2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 snapshot-controller-7d9fbc56b8-pw726         0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 snapshot-controller-7d9fbc56b8-vkxh6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  local-path-storage          local-path-provisioner-648f6765c9-78mbr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-th7hc              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 70s                kube-proxy       
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node addons-910958 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node addons-910958 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x8 over 82s)  kubelet          Node addons-910958 status is now: NodeHasSufficientPID
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node addons-910958 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node addons-910958 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s                kubelet          Node addons-910958 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           73s                node-controller  Node addons-910958 event: Registered Node addons-910958 in Controller
	  Normal  NodeReady                59s                kubelet          Node addons-910958 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b3 fe cd 10 12 08 06
	[  +0.811944] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 5c 19 24 ab 3d 08 06
	[  +0.000422] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 04 17 f0 8d 7e 08 06
	[  +8.879668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 47 76 5d 92 91 08 06
	[  +0.000560] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 59 c5 92 e4 c1 08 06
	[  +7.453315] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a b6 5c b8 94 ee 08 06
	[  +0.000329] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e b3 fe cd 10 12 08 06
	[  +5.665720] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 91 37 97 9f 01 08 06
	[Dec17 07:47] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a4 fc eb e2 b2 08 06
	[ +18.649956] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 c5 b7 1f f7 1d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 a4 fc eb e2 b2 08 06
	[  +0.794503] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 17 bb 9f 9a 4b 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 91 37 97 9f 01 08 06
	
	
	==> etcd [f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54] <==
	{"level":"warn","ts":"2025-12-17T07:50:43.899280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.907255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.915620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.922938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.932387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.940275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.947175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.955176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.964183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.970822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:43.998398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:44.007587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:44.016059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:44.067569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:55.311283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:50:55.318287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.483122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.492834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.508055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T07:51:21.517121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46166","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T07:51:29.163220Z","caller":"traceutil/trace.go:172","msg":"trace[1011630107] transaction","detail":"{read_only:false; response_revision:1082; number_of_response:1; }","duration":"188.913732ms","start":"2025-12-17T07:51:28.974284Z","end":"2025-12-17T07:51:29.163197Z","steps":["trace[1011630107] 'process raft request'  (duration: 161.062278ms)","trace[1011630107] 'compare'  (duration: 27.732802ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T07:51:29.340183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.506542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5kqv5\" limit:1 ","response":"range_response_count:1 size:5034"}
	{"level":"info","ts":"2025-12-17T07:51:29.340297Z","caller":"traceutil/trace.go:172","msg":"trace[740532932] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5kqv5; range_end:; response_count:1; response_revision:1084; }","duration":"131.613749ms","start":"2025-12-17T07:51:29.208649Z","end":"2025-12-17T07:51:29.340263Z","steps":["trace[740532932] 'agreement among raft nodes before linearized reading'  (duration: 59.72777ms)","trace[740532932] 'range keys from in-memory index tree'  (duration: 71.69161ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T07:51:29.340197Z","caller":"traceutil/trace.go:172","msg":"trace[337888061] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"161.769544ms","start":"2025-12-17T07:51:29.178405Z","end":"2025-12-17T07:51:29.340174Z","steps":["trace[337888061] 'process raft request'  (duration: 90.115816ms)","trace[337888061] 'compare'  (duration: 71.536541ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T07:51:34.614862Z","caller":"traceutil/trace.go:172","msg":"trace[2072376352] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"102.342999ms","start":"2025-12-17T07:51:34.512500Z","end":"2025-12-17T07:51:34.614843Z","steps":["trace[2072376352] 'process raft request'  (duration: 102.218732ms)"],"step_count":1}
	
	
	==> gcp-auth [d56cf4295e0dce78f1f395237edae415a76ad80e86448ce815b1e839fd52858d] <==
	2025/12/17 07:51:48 GCP Auth Webhook started!
	2025/12/17 07:51:54 Ready to marshal response ...
	2025/12/17 07:51:54 Ready to write response ...
	2025/12/17 07:51:54 Ready to marshal response ...
	2025/12/17 07:51:54 Ready to write response ...
	2025/12/17 07:51:54 Ready to marshal response ...
	2025/12/17 07:51:54 Ready to write response ...
	
	
	==> kernel <==
	 07:52:04 up  1:34,  0 user,  load average: 1.73, 2.51, 2.53
	Linux addons-910958 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c] <==
	I1217 07:50:55.395273       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1217 07:50:55.395444       1 main.go:148] setting mtu 1500 for CNI 
	I1217 07:50:55.395469       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 07:50:55.395497       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T07:50:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 07:50:55.597520       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 07:50:55.597618       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 07:50:55.597632       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 07:50:55.597851       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 07:50:55.997765       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 07:50:55.997798       1 metrics.go:72] Registering metrics
	I1217 07:50:55.997851       1 controller.go:711] "Syncing nftables rules"
	I1217 07:51:05.598849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:51:05.598913       1 main.go:301] handling current node
	I1217 07:51:15.597645       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:51:15.597767       1 main.go:301] handling current node
	I1217 07:51:25.597732       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:51:25.597795       1 main.go:301] handling current node
	I1217 07:51:35.598172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:51:35.598214       1 main.go:301] handling current node
	I1217 07:51:45.597763       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:51:45.597830       1 main.go:301] handling current node
	I1217 07:51:55.598151       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 07:51:55.598188       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c] <==
	 > logger="UnhandledError"
	E1217 07:51:15.936961       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:15.938701       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:15.943949       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:15.965058       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.006457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.087675       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.248835       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	E1217 07:51:16.570605       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.193.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.193.8:443: connect: connection refused" logger="UnhandledError"
	W1217 07:51:16.937158       1 handler_proxy.go:99] no RequestInfo found in the context
	W1217 07:51:16.937222       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 07:51:16.937254       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 07:51:16.937255       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1217 07:51:16.937265       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1217 07:51:16.938452       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1217 07:51:17.239560       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1217 07:51:21.483110       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 07:51:21.493629       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 07:51:21.507979       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 07:51:21.517159       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1217 07:52:02.609607       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33520: use of closed network connection
	E1217 07:52:02.767953       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33552: use of closed network connection
	
	
	==> kube-controller-manager [d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e] <==
	I1217 07:50:51.464193       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 07:50:51.464208       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 07:50:51.464212       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 07:50:51.464222       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 07:50:51.464229       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 07:50:51.464215       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 07:50:51.464270       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 07:50:51.466612       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 07:50:51.466756       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 07:50:51.466819       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 07:50:51.466867       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 07:50:51.466875       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 07:50:51.466890       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 07:50:51.468268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 07:50:51.468348       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 07:50:51.474914       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 07:50:51.478808       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-910958" podCIDRs=["10.244.0.0/24"]
	I1217 07:50:51.485270       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 07:50:53.903685       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 07:51:06.416246       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 07:51:21.474842       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 07:51:21.474923       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 07:51:21.496379       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 07:51:21.575234       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 07:51:21.596584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15] <==
	I1217 07:50:53.020854       1 server_linux.go:53] "Using iptables proxy"
	I1217 07:50:53.452023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 07:50:53.554600       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 07:50:53.557505       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 07:50:53.565884       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 07:50:53.670393       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 07:50:53.670463       1 server_linux.go:132] "Using iptables Proxier"
	I1217 07:50:53.749903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 07:50:53.751576       1 server.go:527] "Version info" version="v1.34.3"
	I1217 07:50:53.751755       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 07:50:53.754365       1 config.go:309] "Starting node config controller"
	I1217 07:50:53.754928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 07:50:53.754946       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 07:50:53.754632       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 07:50:53.754957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 07:50:53.754594       1 config.go:200] "Starting service config controller"
	I1217 07:50:53.754995       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 07:50:53.754622       1 config.go:106] "Starting endpoint slice config controller"
	I1217 07:50:53.755006       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 07:50:53.856199       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 07:50:53.856247       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 07:50:53.856390       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10] <==
	I1217 07:50:45.047836       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 07:50:45.049500       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 07:50:45.049544       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 07:50:45.049763       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 07:50:45.049801       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 07:50:45.051979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 07:50:45.052085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 07:50:45.052167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 07:50:45.052171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 07:50:45.052386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 07:50:45.052412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 07:50:45.052565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 07:50:45.052795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 07:50:45.052841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 07:50:45.052845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 07:50:45.053118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 07:50:45.053190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 07:50:45.053264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 07:50:45.053289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 07:50:45.053391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 07:50:45.053428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 07:50:45.053475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 07:50:45.053519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 07:50:45.053859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1217 07:50:46.150026       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 07:51:34 addons-910958 kubelet[1302]: I1217 07:51:34.960385    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-sq4qp" podStartSLOduration=2.650739008 podStartE2EDuration="29.960361488s" podCreationTimestamp="2025-12-17 07:51:05 +0000 UTC" firstStartedPulling="2025-12-17 07:51:06.353922916 +0000 UTC m=+19.668895468" lastFinishedPulling="2025-12-17 07:51:33.663545379 +0000 UTC m=+46.978517948" observedRunningTime="2025-12-17 07:51:34.011747107 +0000 UTC m=+47.326719682" watchObservedRunningTime="2025-12-17 07:51:34.960361488 +0000 UTC m=+48.275334062"
	Dec 17 07:51:35 addons-910958 kubelet[1302]: I1217 07:51:35.006893    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sq4qp" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:51:35 addons-910958 kubelet[1302]: I1217 07:51:35.015978    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-vkxh6" podStartSLOduration=12.483346386000001 podStartE2EDuration="41.015959255s" podCreationTimestamp="2025-12-17 07:50:54 +0000 UTC" firstStartedPulling="2025-12-17 07:51:06.359634931 +0000 UTC m=+19.674607498" lastFinishedPulling="2025-12-17 07:51:34.892247801 +0000 UTC m=+48.207220367" observedRunningTime="2025-12-17 07:51:35.015050446 +0000 UTC m=+48.330023041" watchObservedRunningTime="2025-12-17 07:51:35.015959255 +0000 UTC m=+48.330931827"
	Dec 17 07:51:37 addons-910958 kubelet[1302]: E1217 07:51:37.788848    1302 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 17 07:51:37 addons-910958 kubelet[1302]: E1217 07:51:37.788960    1302 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/235047b9-19f8-440e-9443-a43977c33808-gcr-creds podName:235047b9-19f8-440e-9443-a43977c33808 nodeName:}" failed. No retries permitted until 2025-12-17 07:52:09.788937381 +0000 UTC m=+83.103909933 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/235047b9-19f8-440e-9443-a43977c33808-gcr-creds") pod "registry-creds-764b6fb674-brbhv" (UID: "235047b9-19f8-440e-9443-a43977c33808") : secret "registry-creds-gcr" not found
	Dec 17 07:51:38 addons-910958 kubelet[1302]: I1217 07:51:38.019552    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-x5kj2" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:51:38 addons-910958 kubelet[1302]: I1217 07:51:38.029975    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-x5kj2" podStartSLOduration=1.8833228370000001 podStartE2EDuration="33.029953035s" podCreationTimestamp="2025-12-17 07:51:05 +0000 UTC" firstStartedPulling="2025-12-17 07:51:06.443610999 +0000 UTC m=+19.758583573" lastFinishedPulling="2025-12-17 07:51:37.590241207 +0000 UTC m=+50.905213771" observedRunningTime="2025-12-17 07:51:38.029708424 +0000 UTC m=+51.344681016" watchObservedRunningTime="2025-12-17 07:51:38.029953035 +0000 UTC m=+51.344925609"
	Dec 17 07:51:39 addons-910958 kubelet[1302]: I1217 07:51:39.028224    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-x5kj2" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 07:51:41 addons-910958 kubelet[1302]: I1217 07:51:41.773611    1302 scope.go:117] "RemoveContainer" containerID="f97dac13ae741cba489b350ef2b2a08cdeda15f40c61163e531a5872df62d1b5"
	Dec 17 07:51:42 addons-910958 kubelet[1302]: I1217 07:51:42.050554    1302 scope.go:117] "RemoveContainer" containerID="f97dac13ae741cba489b350ef2b2a08cdeda15f40c61163e531a5872df62d1b5"
	Dec 17 07:51:42 addons-910958 kubelet[1302]: I1217 07:51:42.072288    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-g8wb6" podStartSLOduration=17.309013274 podStartE2EDuration="48.072263626s" podCreationTimestamp="2025-12-17 07:50:54 +0000 UTC" firstStartedPulling="2025-12-17 07:51:10.212037497 +0000 UTC m=+23.527010074" lastFinishedPulling="2025-12-17 07:51:40.975287853 +0000 UTC m=+54.290260426" observedRunningTime="2025-12-17 07:51:42.071767324 +0000 UTC m=+55.386739898" watchObservedRunningTime="2025-12-17 07:51:42.072263626 +0000 UTC m=+55.387236203"
	Dec 17 07:51:43 addons-910958 kubelet[1302]: I1217 07:51:43.643913    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd92l\" (UniqueName: \"kubernetes.io/projected/3f4c7a6b-cb1d-440d-8357-90da94cf5b18-kube-api-access-xd92l\") pod \"3f4c7a6b-cb1d-440d-8357-90da94cf5b18\" (UID: \"3f4c7a6b-cb1d-440d-8357-90da94cf5b18\") "
	Dec 17 07:51:43 addons-910958 kubelet[1302]: I1217 07:51:43.647110    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4c7a6b-cb1d-440d-8357-90da94cf5b18-kube-api-access-xd92l" (OuterVolumeSpecName: "kube-api-access-xd92l") pod "3f4c7a6b-cb1d-440d-8357-90da94cf5b18" (UID: "3f4c7a6b-cb1d-440d-8357-90da94cf5b18"). InnerVolumeSpecName "kube-api-access-xd92l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 07:51:43 addons-910958 kubelet[1302]: I1217 07:51:43.744786    1302 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xd92l\" (UniqueName: \"kubernetes.io/projected/3f4c7a6b-cb1d-440d-8357-90da94cf5b18-kube-api-access-xd92l\") on node \"addons-910958\" DevicePath \"\""
	Dec 17 07:51:44 addons-910958 kubelet[1302]: I1217 07:51:44.072356    1302 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25106b199db7e3136f180220963e1db66762b2b406e62f2f3a2489c92e2f401"
	Dec 17 07:51:46 addons-910958 kubelet[1302]: I1217 07:51:46.094434    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-tnjvc" podStartSLOduration=44.925859096 podStartE2EDuration="52.094405885s" podCreationTimestamp="2025-12-17 07:50:54 +0000 UTC" firstStartedPulling="2025-12-17 07:51:38.074418117 +0000 UTC m=+51.389390686" lastFinishedPulling="2025-12-17 07:51:45.242964897 +0000 UTC m=+58.557937475" observedRunningTime="2025-12-17 07:51:46.093281633 +0000 UTC m=+59.408254230" watchObservedRunningTime="2025-12-17 07:51:46.094405885 +0000 UTC m=+59.409378459"
	Dec 17 07:51:49 addons-910958 kubelet[1302]: I1217 07:51:49.109232    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-r29vk" podStartSLOduration=38.867489699 podStartE2EDuration="49.109204502s" podCreationTimestamp="2025-12-17 07:51:00 +0000 UTC" firstStartedPulling="2025-12-17 07:51:38.074667189 +0000 UTC m=+51.389639757" lastFinishedPulling="2025-12-17 07:51:48.316381995 +0000 UTC m=+61.631354560" observedRunningTime="2025-12-17 07:51:49.108349203 +0000 UTC m=+62.423321777" watchObservedRunningTime="2025-12-17 07:51:49.109204502 +0000 UTC m=+62.424177089"
	Dec 17 07:51:49 addons-910958 kubelet[1302]: I1217 07:51:49.818226    1302 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 17 07:51:49 addons-910958 kubelet[1302]: I1217 07:51:49.818272    1302 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 17 07:51:52 addons-910958 kubelet[1302]: I1217 07:51:52.134622    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-lmbsr" podStartSLOduration=1.9102027929999998 podStartE2EDuration="47.134598793s" podCreationTimestamp="2025-12-17 07:51:05 +0000 UTC" firstStartedPulling="2025-12-17 07:51:06.35396826 +0000 UTC m=+19.668940814" lastFinishedPulling="2025-12-17 07:51:51.578364243 +0000 UTC m=+64.893336814" observedRunningTime="2025-12-17 07:51:52.133445374 +0000 UTC m=+65.448417982" watchObservedRunningTime="2025-12-17 07:51:52.134598793 +0000 UTC m=+65.449571367"
	Dec 17 07:51:54 addons-910958 kubelet[1302]: I1217 07:51:54.533290    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxl6d\" (UniqueName: \"kubernetes.io/projected/65e19e6e-a12c-411f-b533-a578e1a367ef-kube-api-access-vxl6d\") pod \"busybox\" (UID: \"65e19e6e-a12c-411f-b533-a578e1a367ef\") " pod="default/busybox"
	Dec 17 07:51:54 addons-910958 kubelet[1302]: I1217 07:51:54.533339    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/65e19e6e-a12c-411f-b533-a578e1a367ef-gcp-creds\") pod \"busybox\" (UID: \"65e19e6e-a12c-411f-b533-a578e1a367ef\") " pod="default/busybox"
	Dec 17 07:51:57 addons-910958 kubelet[1302]: I1217 07:51:57.162477    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.347410296 podStartE2EDuration="3.162453864s" podCreationTimestamp="2025-12-17 07:51:54 +0000 UTC" firstStartedPulling="2025-12-17 07:51:54.767640719 +0000 UTC m=+68.082613273" lastFinishedPulling="2025-12-17 07:51:56.582684268 +0000 UTC m=+69.897656841" observedRunningTime="2025-12-17 07:51:57.161154114 +0000 UTC m=+70.476126689" watchObservedRunningTime="2025-12-17 07:51:57.162453864 +0000 UTC m=+70.477426441"
	Dec 17 07:52:00 addons-910958 kubelet[1302]: I1217 07:52:00.776015    1302 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07ba8ad-8d44-4a1b-9fa3-b540e9f6db59" path="/var/lib/kubelet/pods/b07ba8ad-8d44-4a1b-9fa3-b540e9f6db59/volumes"
	Dec 17 07:52:02 addons-910958 kubelet[1302]: E1217 07:52:02.767926    1302 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37302->127.0.0.1:32999: write tcp 127.0.0.1:37302->127.0.0.1:32999: write: broken pipe
	
	
	==> storage-provisioner [b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194] <==
	W1217 07:51:40.644421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:42.648646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:42.655616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:44.659419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:44.665693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:46.669581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:46.673933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:48.677322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:48.682565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:50.686233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:50.691646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:52.695589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:52.699551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:54.703060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:54.707460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:56.711287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:56.715316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:58.718882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:51:58.724422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:52:00.728007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:52:00.732400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:52:02.736518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:52:02.742685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:52:04.745851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 07:52:04.749878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
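Note: the storage-provisioner log above consists entirely of client-go deprecation warnings; the provisioner is still reading v1 Endpoints objects (most likely for its leader-election lock, which is an assumption, not something shown in the log), and v1.33+ deprecates those in favour of discovery.k8s.io/v1 EndpointSlice. The warnings are noisy but do not indicate a failure. An illustrative check of the replacement resource, assuming kubectl access to this profile's context:

	kubectl --context addons-910958 get endpointslices.discovery.k8s.io -A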
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-910958 -n addons-910958
helpers_test.go:270: (dbg) Run:  kubectl --context addons-910958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-c5vh9 ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5 registry-creds-764b6fb674-brbhv
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-910958 describe pod gcp-auth-certs-patch-c5vh9 ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5 registry-creds-764b6fb674-brbhv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-910958 describe pod gcp-auth-certs-patch-c5vh9 ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5 registry-creds-764b6fb674-brbhv: exit status 1 (66.453835ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-c5vh9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-2r822" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5kqv5" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-brbhv" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-910958 describe pod gcp-auth-certs-patch-c5vh9 ingress-nginx-admission-create-2r822 ingress-nginx-admission-patch-5kqv5 registry-creds-764b6fb674-brbhv: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable headlamp --alsologtostderr -v=1: exit status 11 (248.911618ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:05.435129  566872 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:05.435419  566872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:05.435432  566872 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:05.435439  566872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:05.435676  566872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:05.435975  566872 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:05.436352  566872 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:05.436375  566872 addons.go:622] checking whether the cluster is paused
	I1217 07:52:05.436479  566872 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:05.436504  566872 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:05.436909  566872 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:05.456154  566872 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:05.456254  566872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:05.476579  566872 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:05.569700  566872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:05.569787  566872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:05.599898  566872 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:05.599923  566872 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:05.599932  566872 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:05.599945  566872 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:05.599950  566872 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:05.599955  566872 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:05.599959  566872 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:05.599963  566872 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:05.599968  566872 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:05.599976  566872 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:05.599984  566872 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:05.599986  566872 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:05.599989  566872 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:05.599992  566872 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:05.599995  566872 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:05.600003  566872 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:05.600006  566872 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:05.600009  566872 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:05.600012  566872 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:05.600015  566872 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:05.600018  566872 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:05.600021  566872 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:05.600024  566872 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:05.600026  566872 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:05.600029  566872 cri.go:89] found id: ""
	I1217 07:52:05.600066  566872 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:05.615074  566872 out.go:203] 
	W1217 07:52:05.616368  566872 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:05.616385  566872 out.go:285] * 
	* 
	W1217 07:52:05.620497  566872 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:05.622008  566872 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.60s)
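Note on the failure mode: every addons-disable failure in this report follows the same pattern. Before disabling an addon, minikube checks whether the cluster is paused by listing the kube-system containers with crictl and then running "sudo runc list -f json" on the node; on this crio-based node /run/runc does not exist, so the runc listing itself fails and the command exits with MK_ADDON_DISABLE_PAUSED (exit status 11) before the addon is ever touched. A rough manual reproduction of that check, mirroring the commands shown in the stderr above and assuming the profile from this run (illustrative only, not part of the test):

	# host side: confirm the node container is running (same inspect the disable path performs)
	docker container inspect addons-910958 --format={{.State.Status}}
	# node side: list kube-system container IDs, then ask runc for its view of running containers
	out/minikube-linux-amd64 -p addons-910958 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-amd64 -p addons-910958 ssh "sudo runc list -f json"   # fails here: open /run/runc: no such file or directory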

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-p6tr4" [d65509f5-2c07-45b0-a972-566222ec18c9] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003358118s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (247.474097ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:21.224610  568803 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:21.224903  568803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:21.224908  568803 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:21.224912  568803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:21.225121  568803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:21.225374  568803 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:21.225705  568803 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:21.225722  568803 addons.go:622] checking whether the cluster is paused
	I1217 07:52:21.225799  568803 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:21.225816  568803 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:21.226202  568803 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:21.243955  568803 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:21.244015  568803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:21.262520  568803 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:21.355579  568803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:21.355671  568803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:21.387121  568803 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:21.387154  568803 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:21.387158  568803 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:21.387161  568803 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:21.387165  568803 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:21.387169  568803 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:21.387171  568803 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:21.387174  568803 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:21.387177  568803 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:21.387192  568803 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:21.387195  568803 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:21.387198  568803 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:21.387200  568803 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:21.387203  568803 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:21.387206  568803 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:21.387213  568803 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:21.387216  568803 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:21.387219  568803 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:21.387222  568803 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:21.387225  568803 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:21.387230  568803 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:21.387233  568803 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:21.387235  568803 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:21.387238  568803 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:21.387241  568803 cri.go:89] found id: ""
	I1217 07:52:21.387300  568803 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:21.403033  568803 out.go:203] 
	W1217 07:52:21.404328  568803 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:21.404348  568803 out.go:285] * 
	* 
	W1217 07:52:21.408405  568803 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:21.409783  568803 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                    
TestAddons/parallel/LocalPath (10.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-910958 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-910958 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910958 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [39c96194-5bb6-4ab9-b213-16abfda089ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [39c96194-5bb6-4ab9-b213-16abfda089ef] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [39c96194-5bb6-4ab9-b213-16abfda089ef] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004081801s
addons_test.go:969: (dbg) Run:  kubectl --context addons-910958 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 ssh "cat /opt/local-path-provisioner/pvc-fcb743b3-de9d-497c-9368-b22d621e1e69_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-910958 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-910958 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (253.529641ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:24.543809  569134 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:24.543951  569134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:24.543963  569134 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:24.543967  569134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:24.544170  569134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:24.544467  569134 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:24.544872  569134 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:24.544894  569134 addons.go:622] checking whether the cluster is paused
	I1217 07:52:24.545046  569134 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:24.545068  569134 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:24.545496  569134 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:24.564443  569134 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:24.564497  569134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:24.582963  569134 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:24.676804  569134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:24.676928  569134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:24.707742  569134 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:24.707766  569134 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:24.707771  569134 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:24.707774  569134 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:24.707777  569134 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:24.707780  569134 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:24.707784  569134 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:24.707786  569134 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:24.707789  569134 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:24.707797  569134 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:24.707800  569134 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:24.707803  569134 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:24.707806  569134 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:24.707809  569134 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:24.707811  569134 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:24.707830  569134 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:24.707837  569134 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:24.707842  569134 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:24.707847  569134 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:24.707850  569134 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:24.707856  569134 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:24.707861  569134 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:24.707864  569134 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:24.707866  569134 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:24.707869  569134 cri.go:89] found id: ""
	I1217 07:52:24.707916  569134 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:24.723760  569134 out.go:203] 
	W1217 07:52:24.725391  569134 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:24.725420  569134 out.go:285] * 
	* 
	W1217 07:52:24.729866  569134 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:24.731996  569134 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.14s)
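For reference, the passing part of this test exercises the local-path provisioner end to end: it applies a PVC and a short-lived pod from testdata, waits for the pod to complete, reads the provisioned file back from /opt/local-path-provisioner on the node, and cleans up; only the final addon-disable step hits the runc check failure described earlier. The testdata manifests are not reproduced in this report; an illustrative PVC of the same shape (an assumption, not the actual testdata/storage-provisioner-rancher/pvc.yaml, and assuming the addon's "local-path" StorageClass) would be:

	kubectl --context addons-910958 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path
	  resources:
	    requests:
	      storage: 64Mi
	EOF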

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-vwl8f" [fd39a806-a500-42ea-80ba-0674f5b2dad3] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003706686s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (251.389615ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:09.088880  567036 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:09.089127  567036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:09.089136  567036 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:09.089140  567036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:09.089311  567036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:09.089663  567036 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:09.089988  567036 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:09.090009  567036 addons.go:622] checking whether the cluster is paused
	I1217 07:52:09.090086  567036 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:09.090103  567036 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:09.090517  567036 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:09.110060  567036 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:09.110120  567036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:09.128726  567036 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:09.221522  567036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:09.221635  567036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:09.253622  567036 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:09.253651  567036 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:09.253655  567036 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:09.253662  567036 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:09.253665  567036 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:09.253669  567036 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:09.253672  567036 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:09.253675  567036 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:09.253678  567036 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:09.253697  567036 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:09.253702  567036 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:09.253705  567036 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:09.253708  567036 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:09.253711  567036 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:09.253714  567036 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:09.253726  567036 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:09.253730  567036 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:09.253735  567036 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:09.253741  567036 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:09.253744  567036 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:09.253747  567036 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:09.253752  567036 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:09.253755  567036 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:09.253758  567036 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:09.253763  567036 cri.go:89] found id: ""
	I1217 07:52:09.253814  567036 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:09.268910  567036 out.go:203] 
	W1217 07:52:09.270572  567036 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:09.270602  567036 out.go:285] * 
	* 
	W1217 07:52:09.274840  567036 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:09.276476  567036 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-th7hc" [858564de-d1c2-4122-abfb-d7bfb3680769] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003602406s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable yakd --alsologtostderr -v=1: exit status 11 (252.910563ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:14.408411  568036 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:14.408569  568036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:14.408581  568036 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:14.408586  568036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:14.408799  568036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:14.409116  568036 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:14.409486  568036 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:14.409505  568036 addons.go:622] checking whether the cluster is paused
	I1217 07:52:14.409623  568036 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:14.409650  568036 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:14.410129  568036 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:14.428565  568036 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:14.428641  568036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:14.446605  568036 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:14.540846  568036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:14.540925  568036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:14.573867  568036 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:14.573890  568036 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:14.573893  568036 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:14.573897  568036 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:14.573899  568036 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:14.573902  568036 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:14.573905  568036 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:14.573908  568036 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:14.573911  568036 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:14.573916  568036 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:14.573921  568036 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:14.573924  568036 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:14.573927  568036 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:14.573929  568036 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:14.573932  568036 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:14.573945  568036 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:14.573950  568036 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:14.573954  568036 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:14.573956  568036 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:14.573959  568036 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:14.573965  568036 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:14.573967  568036 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:14.573970  568036 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:14.573973  568036 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:14.573976  568036 cri.go:89] found id: ""
	I1217 07:52:14.574025  568036 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:14.589479  568036 out.go:203] 
	W1217 07:52:14.591000  568036 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:14.591036  568036 out.go:285] * 
	* 
	W1217 07:52:14.595311  568036 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:14.596966  568036 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-sq4qp" [6a2d5958-f154-4067-a1db-57ec5b9dd19f] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003076239s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910958 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910958 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (255.498429ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:52:10.692795  567436 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:52:10.693056  567436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:10.693065  567436 out.go:374] Setting ErrFile to fd 2...
	I1217 07:52:10.693069  567436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:52:10.693261  567436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:52:10.693625  567436 mustload.go:66] Loading cluster: addons-910958
	I1217 07:52:10.693955  567436 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:10.693972  567436 addons.go:622] checking whether the cluster is paused
	I1217 07:52:10.694049  567436 config.go:182] Loaded profile config "addons-910958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:52:10.694068  567436 host.go:66] Checking if "addons-910958" exists ...
	I1217 07:52:10.694449  567436 cli_runner.go:164] Run: docker container inspect addons-910958 --format={{.State.Status}}
	I1217 07:52:10.714243  567436 ssh_runner.go:195] Run: systemctl --version
	I1217 07:52:10.714293  567436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910958
	I1217 07:52:10.733297  567436 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/addons-910958/id_ed25519 Username:docker}
	I1217 07:52:10.827604  567436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 07:52:10.827682  567436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 07:52:10.858122  567436 cri.go:89] found id: "c35c7accdf21bbb08b3d47832ee83eb64360bf101791a22fe2ec1af036db6556"
	I1217 07:52:10.858147  567436 cri.go:89] found id: "2c42903f322a261ccd9a9d80996f67f255e5faadb76fd56219955aac62b0f518"
	I1217 07:52:10.858153  567436 cri.go:89] found id: "d3dd2ef7e0cca0a478b737f2f81c87cfdd1068552efc2b5e93fe2fea53ab7158"
	I1217 07:52:10.858158  567436 cri.go:89] found id: "86a876957c74cf08977617e5e53cfa62f3475d1000953b9391a993284c655301"
	I1217 07:52:10.858163  567436 cri.go:89] found id: "321626501fabd87eed511644c4cb12c8d695493e0a53949f210426b7f065419e"
	I1217 07:52:10.858168  567436 cri.go:89] found id: "7cf58434aeec39a76cb0e3cd658af2d9d7d1ebaddac9dd7eb1cfa1b31c31cbe1"
	I1217 07:52:10.858173  567436 cri.go:89] found id: "d6ff70f629b0ef35c68c1ea4175c9672d0d1c96623f44bebce5b4b52d99a3c4e"
	I1217 07:52:10.858178  567436 cri.go:89] found id: "ff2d1c68029785be60bdfa923383a757424bc7bfcb14575fbfe5377e4009dd2e"
	I1217 07:52:10.858183  567436 cri.go:89] found id: "4f418a0d252469fbc11ce27c34bc8b7ed30e1a26a0254a49fb06a3340e687375"
	I1217 07:52:10.858193  567436 cri.go:89] found id: "79702ad3f3aedc47ebe990568ffcbf233c96faec8a41a284b41491450f39c927"
	I1217 07:52:10.858202  567436 cri.go:89] found id: "ad4f1d71a3d828cc917e975d6018deebe77b3e20e8821f110f8c118bed68858a"
	I1217 07:52:10.858206  567436 cri.go:89] found id: "b48f70dc9cbcf08578eb415ba0f996ebcd68d3271c2686dbcf051f1beb20e2fb"
	I1217 07:52:10.858210  567436 cri.go:89] found id: "1520830ff9484bbf7ff9803d4a8bb462209c3c3f18a084fe880c27ce9f4a2dfb"
	I1217 07:52:10.858213  567436 cri.go:89] found id: "b05e54700098d168d0bb1eec86354649c8e92cc3cbb1e6d98a9583627b36ac7f"
	I1217 07:52:10.858216  567436 cri.go:89] found id: "152830796e570f878216983e24a2af209b2f5ff6b18c82f0ea75421bfe5af485"
	I1217 07:52:10.858229  567436 cri.go:89] found id: "4f95ea3dd74c9f0539dc713b2f8fc75f6ebef6f081bb0596f71a63cfa29f9629"
	I1217 07:52:10.858232  567436 cri.go:89] found id: "ad02540b0f2f5b2fa9b004f23e0f84a34b3275cef39898f6d43fa25bff222a77"
	I1217 07:52:10.858236  567436 cri.go:89] found id: "b6e3631773200fbc01cd0202ce4768d0d88629d3358c71769d219d4d4d679194"
	I1217 07:52:10.858242  567436 cri.go:89] found id: "f0f35e9b0c091e2e242e4571120a01e3e466e61aec5b549d869f3e2ff61c936c"
	I1217 07:52:10.858245  567436 cri.go:89] found id: "08a3cfad5dcdf0137fefbee2104b5227ec689c3bb4ac29982b92c272c124ad15"
	I1217 07:52:10.858250  567436 cri.go:89] found id: "f001874e31dcf9bd1f6b1c664b9b8b4f2e52a626db5bfb8dfb7c5d781c109f54"
	I1217 07:52:10.858255  567436 cri.go:89] found id: "34a5e7ca13b08e3abed86f539ff17e28d11e13f568f75b9e4f67ebc76fbb629c"
	I1217 07:52:10.858258  567436 cri.go:89] found id: "d4daacfac93d4d2be2c4d621b8680034f5d451fd28ae1fdb0675e53819a7f56e"
	I1217 07:52:10.858261  567436 cri.go:89] found id: "e3b8c740226a7c2bb32cfb16f6ed5b20b59d45ecd9d6bc7ade378e26c2b84e10"
	I1217 07:52:10.858270  567436 cri.go:89] found id: ""
	I1217 07:52:10.858311  567436 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 07:52:10.874163  567436 out.go:203] 
	W1217 07:52:10.875681  567436 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:52:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 07:52:10.875704  567436 out.go:285] * 
	* 
	W1217 07:52:10.879887  567436 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 07:52:10.881481  567436 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-910958 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)
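The exit path above is mechanical: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then asking runc for its container list; the second step fails because /run/runc does not exist on this crio node. Below is a minimal sketch of that check, shelling out to the same two commands shown in the log. It is not the minikube source, and running it inside the node (e.g. over `minikube ssh`) is an assumption.

	// Minimal sketch (not the minikube source) of the paused-state check the
	// log shows failing; both commands are copied from the log, and running
	// them inside the node (e.g. over `minikube ssh`) is an assumption.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask the CRI runtime (crio) for kube-system container IDs.
		ids, err := exec.Command("sudo", "-s", "eval",
			"crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		fmt.Printf("crictl ids:\n%s(err=%v)\n", ids, err)

		// The step that fails in this run: runc has no state directory at
		// /run/runc on the node, so listing containers exits with status 1.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc list: %s(err=%v)\n", out, err)
	}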

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image load --daemon kicbase/echo-server:functional-981680 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 image load --daemon kicbase/echo-server:functional-981680 --alsologtostderr: (3.267125073s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 image ls: (2.277618609s)
functional_test.go:461: expected "kicbase/echo-server:functional-981680" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.55s)
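The failing assertion is the second half of a load-then-verify pattern: the image is pushed with `image load --daemon` and then looked for in the `image ls` output. Below is a minimal sketch of that pattern using the subcommands, binary path and profile name taken from the log above; the helper function itself is hypothetical, not the test's code.

	// Minimal sketch of the load-then-verify pattern; subcommands, binary path
	// and profile name are taken from the log, the helper itself is hypothetical.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func imageLoadedInMinikube(profile, image string) (bool, error) {
		// Push the local docker image into the cluster's runtime.
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"image", "load", "--daemon", image).CombinedOutput(); err != nil {
			return false, fmt.Errorf("image load: %v\n%s", err, out)
		}
		// List the images the cluster runtime knows about.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("image ls: %v\n%s", err, out)
		}
		// The assertion that fails above: the tag never appears in this listing.
		return strings.Contains(string(out), image), nil
	}

	func main() {
		ok, err := imageLoadedInMinikube("functional-981680", "kicbase/echo-server:functional-981680")
		fmt.Println(ok, err)
	}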

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (5.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.565544742s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-819971
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image load --daemon kicbase/echo-server:functional-819971 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 image load --daemon kicbase/echo-server:functional-819971 --alsologtostderr: (1.264457774s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 image ls: (2.278663928s)
functional_test.go:461: expected "kicbase/echo-server:functional-819971" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (5.14s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-853267 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-853267 --output=json --user=testUser: exit status 80 (1.719213017s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"360d93c3-3630-4997-bf9e-0c3c98434869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-853267 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"bdc5d677-f02b-4c18-b6c0-b91f51eaa2cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T08:10:26Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"ad135a56-f718-4176-a9fd-3e3fa01eb157","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-853267 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.72s)
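With --output=json, each stdout line above is a CloudEvents-style JSON object; for io.k8s.sigs.minikube.error events the data.name and data.message fields carry the failure (GUEST_PAUSE and the runc error in this run). Below is a minimal sketch for filtering that stream; only field names that appear in the output above are assumed.

	// Minimal sketch for reading the --output=json event stream; only field
	// names that appear in the output above are assumed.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// e.g.: out/minikube-linux-amd64 pause -p json-output-853267 --output=json | this-program
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", ev.Data["name"], ev.Data["message"])
			}
		}
	}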

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.95s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-853267 --output=json --user=testUser
E1217 08:10:27.226921  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-853267 --output=json --user=testUser: exit status 80 (1.948477242s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4f1bc79e-19a9-4c54-bc2d-bf3b6abf3665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-853267 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"aa51d33f-f4f4-4b3b-8471-65180e34f35b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T08:10:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"a7e0f010-d15d-4f7e-b369-28a1d94e7ad1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-853267 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.95s)

                                                
                                    
x
+
TestPause/serial/Pause (6.13s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-262039 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-262039 --alsologtostderr -v=5: exit status 80 (2.397418206s)

                                                
                                                
-- stdout --
	* Pausing node pause-262039 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:24:41.433283  756877 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:24:41.433393  756877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:24:41.433400  756877 out.go:374] Setting ErrFile to fd 2...
	I1217 08:24:41.433405  756877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:24:41.433651  756877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:24:41.433987  756877 out.go:368] Setting JSON to false
	I1217 08:24:41.434015  756877 mustload.go:66] Loading cluster: pause-262039
	I1217 08:24:41.434432  756877 config.go:182] Loaded profile config "pause-262039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:24:41.434860  756877 cli_runner.go:164] Run: docker container inspect pause-262039 --format={{.State.Status}}
	I1217 08:24:41.452838  756877 host.go:66] Checking if "pause-262039" exists ...
	I1217 08:24:41.453111  756877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:24:41.516524  756877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:72 SystemTime:2025-12-17 08:24:41.506322203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:24:41.517362  756877 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-262039 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 08:24:41.542998  756877 out.go:179] * Pausing node pause-262039 ... 
	I1217 08:24:41.546479  756877 host.go:66] Checking if "pause-262039" exists ...
	I1217 08:24:41.546891  756877 ssh_runner.go:195] Run: systemctl --version
	I1217 08:24:41.546954  756877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:41.568353  756877 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:41.662145  756877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:24:41.684224  756877 pause.go:52] kubelet running: true
	I1217 08:24:41.684290  756877 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:24:41.814443  756877 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:24:41.814568  756877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:24:41.887128  756877 cri.go:89] found id: "ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e"
	I1217 08:24:41.887153  756877 cri.go:89] found id: "c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f"
	I1217 08:24:41.887159  756877 cri.go:89] found id: "5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41"
	I1217 08:24:41.887163  756877 cri.go:89] found id: "5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417"
	I1217 08:24:41.887166  756877 cri.go:89] found id: "b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c"
	I1217 08:24:41.887170  756877 cri.go:89] found id: "5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2"
	I1217 08:24:41.887173  756877 cri.go:89] found id: "39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463"
	I1217 08:24:41.887177  756877 cri.go:89] found id: ""
	I1217 08:24:41.887231  756877 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:24:41.899617  756877 retry.go:31] will retry after 206.765702ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:24:41Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:24:42.107158  756877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:24:42.121116  756877 pause.go:52] kubelet running: false
	I1217 08:24:42.121214  756877 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:24:42.231879  756877 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:24:42.231968  756877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:24:42.304735  756877 cri.go:89] found id: "ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e"
	I1217 08:24:42.304767  756877 cri.go:89] found id: "c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f"
	I1217 08:24:42.304772  756877 cri.go:89] found id: "5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41"
	I1217 08:24:42.304775  756877 cri.go:89] found id: "5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417"
	I1217 08:24:42.304778  756877 cri.go:89] found id: "b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c"
	I1217 08:24:42.304780  756877 cri.go:89] found id: "5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2"
	I1217 08:24:42.304783  756877 cri.go:89] found id: "39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463"
	I1217 08:24:42.304785  756877 cri.go:89] found id: ""
	I1217 08:24:42.304825  756877 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:24:42.318383  756877 retry.go:31] will retry after 521.417257ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:24:42Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:24:42.840134  756877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:24:42.854110  756877 pause.go:52] kubelet running: false
	I1217 08:24:42.854166  756877 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:24:42.968823  756877 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:24:42.968919  756877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:24:43.043609  756877 cri.go:89] found id: "ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e"
	I1217 08:24:43.043632  756877 cri.go:89] found id: "c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f"
	I1217 08:24:43.043637  756877 cri.go:89] found id: "5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41"
	I1217 08:24:43.043642  756877 cri.go:89] found id: "5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417"
	I1217 08:24:43.043647  756877 cri.go:89] found id: "b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c"
	I1217 08:24:43.043652  756877 cri.go:89] found id: "5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2"
	I1217 08:24:43.043656  756877 cri.go:89] found id: "39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463"
	I1217 08:24:43.043660  756877 cri.go:89] found id: ""
	I1217 08:24:43.043723  756877 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:24:43.056614  756877 retry.go:31] will retry after 460.679154ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:24:43Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:24:43.518029  756877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:24:43.532087  756877 pause.go:52] kubelet running: false
	I1217 08:24:43.532147  756877 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:24:43.660294  756877 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:24:43.660402  756877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:24:43.738273  756877 cri.go:89] found id: "ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e"
	I1217 08:24:43.738295  756877 cri.go:89] found id: "c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f"
	I1217 08:24:43.738299  756877 cri.go:89] found id: "5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41"
	I1217 08:24:43.738303  756877 cri.go:89] found id: "5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417"
	I1217 08:24:43.738305  756877 cri.go:89] found id: "b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c"
	I1217 08:24:43.738308  756877 cri.go:89] found id: "5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2"
	I1217 08:24:43.738311  756877 cri.go:89] found id: "39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463"
	I1217 08:24:43.738314  756877 cri.go:89] found id: ""
	I1217 08:24:43.738351  756877 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:24:43.753892  756877 out.go:203] 
	W1217 08:24:43.755442  756877 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:24:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:24:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 08:24:43.755464  756877 out.go:285] * 
	* 
	W1217 08:24:43.761980  756877 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 08:24:43.763733  756877 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-262039 --alsologtostderr -v=5" : exit status 80
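The stderr above shows the pause flow step by step: stop kubelet, enumerate kube-system containers with crictl, then ask runc for the running set, retrying after short delays before giving up with GUEST_PAUSE. Below is a simplified sketch of that loop; the delays and commands are taken from the log, and the loop itself is illustrative, not minikube's retry.go.

	// Simplified sketch of the pause loop visible above (not minikube's retry.go):
	// stop kubelet, then retry `sudo runc list -f json` with the delays from the
	// log; in this run every attempt fails because /run/runc is missing.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// The log shows kubelet being disabled first so no new pods start mid-pause.
		_ = exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run()

		delays := []time.Duration{207 * time.Millisecond, 521 * time.Millisecond, 461 * time.Millisecond}
		for _, d := range delays {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			fmt.Printf("runc list failed (%v), retrying in %v\n", err, d)
			time.Sleep(d)
		}
		fmt.Println("giving up: GUEST_PAUSE")
	}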
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-262039
helpers_test.go:244: (dbg) docker inspect pause-262039:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4",
	        "Created": "2025-12-17T08:23:39.590968328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:23:39.930971935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/hosts",
	        "LogPath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4-json.log",
	        "Name": "/pause-262039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-262039:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-262039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4",
	                "LowerDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-262039",
	                "Source": "/var/lib/docker/volumes/pause-262039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-262039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-262039",
	                "name.minikube.sigs.k8s.io": "pause-262039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d188f108b8a9f59402016e8d2d09099af22f64b09286f3a475ee159967544810",
	            "SandboxKey": "/var/run/docker/netns/d188f108b8a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-262039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d6b1425cabf49ac0245c0db8995157bcbe363b2318a123863793eae3c8f725b4",
	                    "EndpointID": "e2c1db848f3099589afa6b10edbf71f763b64e989d6a37fb10bd516f36c672c7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "3e:6e:22:9a:41:03",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-262039",
	                        "756222a0391f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-262039 -n pause-262039
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-262039 -n pause-262039: exit status 2 (359.439079ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-262039 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-262039 logs -n 25: (1.004518718s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-309868 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:21 UTC │ 17 Dec 25 08:22 UTC │
	│ stop    │ -p scheduled-stop-309868 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --cancel-scheduled                                                                                              │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │ 17 Dec 25 08:22 UTC │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │ 17 Dec 25 08:22 UTC │
	│ delete  │ -p scheduled-stop-309868                                                                                                                 │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
	│ start   │ -p insufficient-storage-691717 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-691717 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │                     │
	│ delete  │ -p insufficient-storage-691717                                                                                                           │ insufficient-storage-691717 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
	│ start   │ -p offline-crio-077569 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-077569         │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p pause-262039 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-262039                │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p missing-upgrade-442124 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-442124      │ jenkins │ v1.35.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p stopped-upgrade-387280 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-387280      │ jenkins │ v1.35.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ stop    │ stopped-upgrade-387280 stop                                                                                                              │ stopped-upgrade-387280      │ jenkins │ v1.35.0 │ 17 Dec 25 08:24 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p stopped-upgrade-387280 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-387280      │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	│ start   │ -p missing-upgrade-442124 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-442124      │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	│ start   │ -p pause-262039 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-262039                │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │ 17 Dec 25 08:24 UTC │
	│ delete  │ -p offline-crio-077569                                                                                                                   │ offline-crio-077569         │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-568559   │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	│ pause   │ -p pause-262039 --alsologtostderr -v=5                                                                                                   │ pause-262039                │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:24:37
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:24:37.615651  755896 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:24:37.615962  755896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:24:37.615974  755896 out.go:374] Setting ErrFile to fd 2...
	I1217 08:24:37.615979  755896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:24:37.616249  755896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:24:37.616849  755896 out.go:368] Setting JSON to false
	I1217 08:24:37.618044  755896 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7623,"bootTime":1765952255,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:24:37.618128  755896 start.go:143] virtualization: kvm guest
	I1217 08:24:37.620284  755896 out.go:179] * [kubernetes-upgrade-568559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:24:37.622373  755896 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:24:37.622396  755896 notify.go:221] Checking for updates...
	I1217 08:24:37.625043  755896 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:24:37.626405  755896 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:24:37.628553  755896 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:24:37.635377  755896 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:24:37.640362  755896 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:24:37.642529  755896 config.go:182] Loaded profile config "missing-upgrade-442124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 08:24:37.642764  755896 config.go:182] Loaded profile config "pause-262039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:24:37.642884  755896 config.go:182] Loaded profile config "stopped-upgrade-387280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 08:24:37.643020  755896 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:24:37.678011  755896 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:24:37.678174  755896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:24:37.747407  755896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-17 08:24:37.736113651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:24:37.747526  755896 docker.go:319] overlay module found
	I1217 08:24:37.749833  755896 out.go:179] * Using the docker driver based on user configuration
	I1217 08:24:37.752073  755896 start.go:309] selected driver: docker
	I1217 08:24:37.752095  755896 start.go:927] validating driver "docker" against <nil>
	I1217 08:24:37.752112  755896 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:24:37.752891  755896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:24:37.820619  755896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-17 08:24:37.809032746 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:24:37.820826  755896 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 08:24:37.821040  755896 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 08:24:37.823674  755896 out.go:179] * Using Docker driver with root privileges
	I1217 08:24:37.825172  755896 cni.go:84] Creating CNI manager for ""
	I1217 08:24:37.825250  755896 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:24:37.825265  755896 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:24:37.825378  755896 start.go:353] cluster config:
	{Name:kubernetes-upgrade-568559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-568559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:24:37.827403  755896 out.go:179] * Starting "kubernetes-upgrade-568559" primary control-plane node in "kubernetes-upgrade-568559" cluster
	I1217 08:24:37.828806  755896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:24:37.830139  755896 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:24:37.831341  755896 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 08:24:37.831376  755896 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 08:24:37.831406  755896 cache.go:65] Caching tarball of preloaded images
	I1217 08:24:37.831448  755896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:24:37.831492  755896 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:24:37.831506  755896 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1217 08:24:37.831634  755896 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/config.json ...
	I1217 08:24:37.831658  755896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/config.json: {Name:mka23573240d543b092a6289ab89276c12909cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:37.855096  755896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:24:37.855125  755896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:24:37.855145  755896 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:24:37.855186  755896 start.go:360] acquireMachinesLock for kubernetes-upgrade-568559: {Name:mk5636d609bb7e26490f37904bc8f5a2418d7e2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:24:37.855329  755896 start.go:364] duration metric: took 117.906µs to acquireMachinesLock for "kubernetes-upgrade-568559"
	I1217 08:24:37.855362  755896 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-568559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-568559 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:24:37.855444  755896 start.go:125] createHost starting for "" (driver="docker")
	I1217 08:24:34.892519  754490 out.go:252] * Updating the running docker "pause-262039" container ...
	I1217 08:24:34.892566  754490 machine.go:94] provisionDockerMachine start ...
	I1217 08:24:34.892643  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:34.916131  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:34.916302  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:34.916322  754490 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:24:35.054572  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-262039
	
	I1217 08:24:35.054600  754490 ubuntu.go:182] provisioning hostname "pause-262039"
	I1217 08:24:35.054738  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.080797  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:35.080931  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:35.080945  754490 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-262039 && echo "pause-262039" | sudo tee /etc/hostname
	I1217 08:24:35.253395  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-262039
	
	I1217 08:24:35.253494  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.287266  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:35.287417  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:35.287446  754490 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-262039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-262039/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-262039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:24:35.434248  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
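The provisioning steps above run `hostname` and patch /etc/hosts on the node over SSH, using minikube's native Go SSH client against the Docker-published port (127.0.0.1:33385 in this run). Below is a minimal sketch of issuing one such command with golang.org/x/crypto/ssh; it is not minikube's actual implementation. The port and key path are copied from this log, everything else (error handling, helper structure) is an illustrative assumption.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the log above; test-environment assumptions only.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33385", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Equivalent of the "About to run SSH command: hostname" step above.
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}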
	I1217 08:24:35.434289  754490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:24:35.434314  754490 ubuntu.go:190] setting up certificates
	I1217 08:24:35.434326  754490 provision.go:84] configureAuth start
	I1217 08:24:35.434734  754490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-262039
	I1217 08:24:35.464087  754490 provision.go:143] copyHostCerts
	I1217 08:24:35.464267  754490 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:24:35.464328  754490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:24:35.464436  754490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:24:35.464707  754490 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:24:35.464720  754490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:24:35.464767  754490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:24:35.464882  754490 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:24:35.464964  754490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:24:35.465024  754490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:24:35.465173  754490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.pause-262039 san=[127.0.0.1 192.168.76.2 localhost minikube pause-262039]
	I1217 08:24:35.587464  754490 provision.go:177] copyRemoteCerts
	I1217 08:24:35.587624  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:24:35.587687  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.611823  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:35.717557  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:24:35.739694  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 08:24:35.763757  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 08:24:35.784591  754490 provision.go:87] duration metric: took 350.247491ms to configureAuth
	I1217 08:24:35.784624  754490 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:24:35.784847  754490 config.go:182] Loaded profile config "pause-262039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:24:35.784951  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.807992  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:35.808126  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:35.808142  754490 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:24:36.150505  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:24:36.150552  754490 machine.go:97] duration metric: took 1.257977874s to provisionDockerMachine
	I1217 08:24:36.150566  754490 start.go:293] postStartSetup for "pause-262039" (driver="docker")
	I1217 08:24:36.150581  754490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:24:36.150652  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:24:36.150697  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.172828  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.270120  754490 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:24:36.274631  754490 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:24:36.274669  754490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:24:36.274682  754490 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:24:36.274741  754490 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:24:36.274813  754490 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:24:36.274908  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:24:36.283986  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:24:36.304091  754490 start.go:296] duration metric: took 153.50314ms for postStartSetup
	I1217 08:24:36.304181  754490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:24:36.304228  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.325181  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.420978  754490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:24:36.428014  754490 fix.go:56] duration metric: took 1.559371912s for fixHost
	I1217 08:24:36.428108  754490 start.go:83] releasing machines lock for "pause-262039", held for 1.559478106s
	I1217 08:24:36.428197  754490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-262039
	I1217 08:24:36.453160  754490 ssh_runner.go:195] Run: cat /version.json
	I1217 08:24:36.453246  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.453266  754490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:24:36.453341  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.481238  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.481399  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.652392  754490 ssh_runner.go:195] Run: systemctl --version
	I1217 08:24:36.661695  754490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:24:36.710178  754490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:24:36.715525  754490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:24:36.715653  754490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:24:36.725897  754490 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:24:36.725936  754490 start.go:496] detecting cgroup driver to use...
	I1217 08:24:36.725972  754490 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:24:36.726028  754490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:24:36.743524  754490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:24:36.759920  754490 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:24:36.759983  754490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:24:36.783854  754490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:24:36.801311  754490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:24:36.973840  754490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:24:37.119327  754490 docker.go:234] disabling docker service ...
	I1217 08:24:37.119405  754490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:24:37.137697  754490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:24:37.157010  754490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:24:37.305083  754490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:24:37.457499  754490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:24:37.471668  754490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:24:37.490836  754490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:24:37.490903  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.504421  754490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:24:37.504496  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.514969  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.525453  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.537468  754490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:24:37.549014  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.560901  754490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.570089  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.579967  754490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:24:37.588979  754490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:24:37.597771  754490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:37.753260  754490 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 08:24:37.975517  754490 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:24:37.975622  754490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:24:37.980090  754490 start.go:564] Will wait 60s for crictl version
	I1217 08:24:37.980152  754490 ssh_runner.go:195] Run: which crictl
	I1217 08:24:37.984008  754490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:24:38.013157  754490 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:24:38.013250  754490 ssh_runner.go:195] Run: crio --version
	I1217 08:24:38.047128  754490 ssh_runner.go:195] Run: crio --version
	I1217 08:24:38.086744  754490 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
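The CRI-O preparation above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pinning the pause image, setting cgroup_manager to systemd, adding conmon_cgroup and default_sysctls) and then restarts the service. A rough Go equivalent of that line-rewrite pattern is sketched below; setCrioOption is an invented helper for illustration, not part of minikube, and it assumes simple one-per-line `key = value` entries in the drop-in.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption replaces any existing `<key> = ...` line with `<key> = "<value>"`,
// mirroring the sed commands logged above. Assumption: the key already appears in the file.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Values copied from the log above; a restart of crio would still be needed afterwards.
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setCrioOption(conf, "cgroup_manager", "systemd"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}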
	I1217 08:24:34.135128  752745 cli_runner.go:164] Run: docker network inspect stopped-upgrade-387280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:24:34.153894  752745 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:24:34.157973  752745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
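Both clusters pin host.minikube.internal (and later control-plane.minikube.internal) in /etc/hosts by filtering out any stale line and appending a fresh one, as in the bash one-liner above. The Go sketch below shows the same filter-and-append pattern for comparison; the function name and local-file rewrite are illustrative assumptions, since minikube performs this remotely via the shell command shown.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry drops any line ending in "\t<name>" (like grep -v $'\t<name>$')
// and appends "<ip>\t<name>". Illustrative only.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// IP and hostname copied from the log above.
	if err := pinHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}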
	I1217 08:24:34.171199  752745 kubeadm.go:884] updating cluster {Name:stopped-upgrade-387280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-387280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:24:34.171306  752745 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1217 08:24:34.171354  752745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:34.214514  752745 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:34.214550  752745 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:24:34.214606  752745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:34.249793  752745 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:34.249819  752745 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:24:34.249828  752745 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1217 08:24:34.249925  752745 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-387280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-387280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:24:34.249994  752745 ssh_runner.go:195] Run: crio config
	I1217 08:24:34.294227  752745 cni.go:84] Creating CNI manager for ""
	I1217 08:24:34.294254  752745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:24:34.294272  752745 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:24:34.294292  752745 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-387280 NodeName:stopped-upgrade-387280 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:24:34.294433  752745 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-387280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:24:34.294498  752745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1217 08:24:34.304319  752745 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:24:34.304387  752745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:24:34.314285  752745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 08:24:34.333665  752745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:24:34.353513  752745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 08:24:34.373210  752745 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:24:34.377165  752745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:24:34.389424  752745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:34.455398  752745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:24:34.478621  752745 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280 for IP: 192.168.103.2
	I1217 08:24:34.478645  752745 certs.go:195] generating shared ca certs ...
	I1217 08:24:34.478664  752745 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:34.478811  752745 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:24:34.478849  752745 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:24:34.478860  752745 certs.go:257] generating profile certs ...
	I1217 08:24:34.478954  752745 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/client.key
	I1217 08:24:34.479026  752745 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/apiserver.key.0c52ecad
	I1217 08:24:34.479063  752745 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/proxy-client.key
	I1217 08:24:34.479170  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:24:34.479217  752745 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:24:34.479230  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:24:34.479256  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:24:34.479282  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:24:34.479308  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:24:34.479351  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:24:34.479947  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:24:34.510421  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:24:34.541230  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:24:34.578498  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:24:34.609114  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 08:24:34.636714  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:24:34.668471  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:24:34.698063  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:24:34.730147  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:24:34.760706  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:24:34.794375  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:24:34.828219  752745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:24:34.852719  752745 ssh_runner.go:195] Run: openssl version
	I1217 08:24:34.860318  752745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.872130  752745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:24:34.884065  752745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.888895  752745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.888948  752745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.896591  752745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:24:34.909284  752745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.920628  752745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:24:34.930414  752745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.934896  752745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.934978  752745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.944045  752745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:24:34.954077  752745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.965397  752745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:24:34.976959  752745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.981135  752745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.981204  752745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.990573  752745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
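The certificate installation pattern above links each PEM into /usr/share/ca-certificates, asks openssl for its subject hash, and then verifies that a /etc/ssl/certs/<hash>.0 symlink exists (b5213941.0 for minikubeCA here). A minimal sketch of that pattern follows; shelling out to openssl mirrors the log, but the helper itself is an assumption, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> certPath, the layout
// that the `sudo test -L /etc/ssl/certs/<hash>.0` checks above look for.
func linkByHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("installed", link)
}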
	I1217 08:24:35.001255  752745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:24:35.005495  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:24:35.013088  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:24:35.020522  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:24:35.028475  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:24:35.037331  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:24:35.045484  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
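The `openssl x509 -checkend 86400` calls above confirm that each existing control-plane certificate remains valid for at least another 24 hours before the cluster is restarted rather than re-provisioned. A small Go equivalent using crypto/x509 is sketched below; the file path in main is taken from the log, while the function itself is illustrative rather than minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the condition `openssl x509 -checkend <seconds>` fails on.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}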
	I1217 08:24:35.053960  752745 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-387280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-387280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:24:35.054071  752745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:24:35.054143  752745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:24:35.116244  752745 cri.go:89] found id: ""
	I1217 08:24:35.116513  752745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:24:35.139202  752745 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:24:35.139227  752745 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:24:35.139286  752745 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:24:35.156138  752745 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:24:35.157349  752745 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-387280" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:24:35.158038  752745 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-387280" cluster setting kubeconfig missing "stopped-upgrade-387280" context setting]
	I1217 08:24:35.159053  752745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:35.160317  752745 kapi.go:59] client config for stopped-upgrade-387280: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/client.key", CAFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 08:24:35.161081  752745 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 08:24:35.161107  752745 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 08:24:35.161114  752745 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 08:24:35.161121  752745 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 08:24:35.161127  752745 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 08:24:35.161616  752745 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:24:35.175769  752745 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 08:24:14.208017901 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 08:24:34.370430964 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
	I1217 08:24:35.175846  752745 kubeadm.go:1161] stopping kube-system containers ...
	I1217 08:24:35.175869  752745 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 08:24:35.175930  752745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:24:35.248765  752745 cri.go:89] found id: "a78ad306f0dda4bec4ab84ea319926ee2c424d1293fe655a1112e3e41147254f"
	I1217 08:24:35.248997  752745 cri.go:89] found id: "b4d42c183d8713539c409a6f3197c9fef68188e04227888130c078bb5518e83f"
	I1217 08:24:35.249024  752745 cri.go:89] found id: "776aefbc0e5cb953d7d558fee0bf6a02210f34d748acaa243f9596852e21debc"
	I1217 08:24:35.249072  752745 cri.go:89] found id: "68a7512cc355764576a7d4eaa3332d536ea531543586a239a8ef276443dd0c33"
	I1217 08:24:35.249098  752745 cri.go:89] found id: ""
	I1217 08:24:35.249110  752745 cri.go:252] Stopping containers: [a78ad306f0dda4bec4ab84ea319926ee2c424d1293fe655a1112e3e41147254f b4d42c183d8713539c409a6f3197c9fef68188e04227888130c078bb5518e83f 776aefbc0e5cb953d7d558fee0bf6a02210f34d748acaa243f9596852e21debc 68a7512cc355764576a7d4eaa3332d536ea531543586a239a8ef276443dd0c33]
	I1217 08:24:35.249200  752745 ssh_runner.go:195] Run: which crictl
	I1217 08:24:35.256212  752745 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a78ad306f0dda4bec4ab84ea319926ee2c424d1293fe655a1112e3e41147254f b4d42c183d8713539c409a6f3197c9fef68188e04227888130c078bb5518e83f 776aefbc0e5cb953d7d558fee0bf6a02210f34d748acaa243f9596852e21debc 68a7512cc355764576a7d4eaa3332d536ea531543586a239a8ef276443dd0c33
	I1217 08:24:38.088658  754490 cli_runner.go:164] Run: docker network inspect pause-262039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:24:38.109257  754490 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 08:24:38.114036  754490 kubeadm.go:884] updating cluster {Name:pause-262039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-262039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:24:38.114253  754490 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:24:38.114342  754490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:38.155067  754490 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:38.155092  754490 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:24:38.155150  754490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:38.184976  754490 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:38.185006  754490 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:24:38.185016  754490 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1217 08:24:38.185143  754490 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-262039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-262039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:24:38.185224  754490 ssh_runner.go:195] Run: crio config
	I1217 08:24:38.244368  754490 cni.go:84] Creating CNI manager for ""
	I1217 08:24:38.244397  754490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:24:38.244423  754490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:24:38.244455  754490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-262039 NodeName:pause-262039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:24:38.244660  754490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-262039"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:24:38.244741  754490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:24:38.253612  754490 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:24:38.253676  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:24:38.261948  754490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 08:24:38.276114  754490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:24:38.293654  754490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1217 08:24:38.315984  754490 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:24:38.320507  754490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:38.455834  754490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:24:38.472730  754490 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039 for IP: 192.168.76.2
	I1217 08:24:38.472757  754490 certs.go:195] generating shared ca certs ...
	I1217 08:24:38.472781  754490 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:38.472990  754490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:24:38.473050  754490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:24:38.473064  754490 certs.go:257] generating profile certs ...
	I1217 08:24:38.473173  754490 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.key
	I1217 08:24:38.473240  754490 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/apiserver.key.d74e051e
	I1217 08:24:38.473304  754490 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/proxy-client.key
	I1217 08:24:38.473472  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:24:38.473518  754490 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:24:38.473543  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:24:38.473582  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:24:38.473616  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:24:38.473652  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:24:38.473714  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:24:38.474526  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:24:38.495973  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:24:38.519170  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:24:38.538988  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:24:38.560112  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 08:24:38.579898  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 08:24:38.611812  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:24:38.635025  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:24:38.662804  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:24:38.684830  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:24:38.705469  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:24:38.728352  754490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:24:38.747605  754490 ssh_runner.go:195] Run: openssl version
	I1217 08:24:38.755061  754490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.764464  754490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:24:38.773001  754490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.777177  754490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.777254  754490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.814977  754490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:24:38.823547  754490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.831651  754490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:24:38.841499  754490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.845785  754490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.845857  754490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.881567  754490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:24:38.889980  754490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.898303  754490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:24:38.906586  754490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.910965  754490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.911039  754490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.946634  754490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:24:38.955550  754490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:24:38.960011  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:24:38.996135  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:24:39.031316  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:24:39.070161  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:24:39.106248  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:24:39.141394  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
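For context, the openssl x509 -checkend 86400 runs above verify that each existing control-plane certificate remains valid for at least another 24 hours before minikube reuses it. A minimal Go sketch of the same check, with a placeholder path standing in for the files under /var/lib/minikube/certs on the node:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log checks apiserver-kubelet-client.crt,
	// etcd/server.crt, etcd/peer.crt, front-proxy-client.crt, and others.
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: report failure if the
	// certificate expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}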
	I1217 08:24:39.176665  754490 kubeadm.go:401] StartCluster: {Name:pause-262039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-262039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:24:39.176802  754490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:24:39.176865  754490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:24:39.207924  754490 cri.go:89] found id: "ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e"
	I1217 08:24:39.207955  754490 cri.go:89] found id: "c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f"
	I1217 08:24:39.207962  754490 cri.go:89] found id: "5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41"
	I1217 08:24:39.207968  754490 cri.go:89] found id: "5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417"
	I1217 08:24:39.207973  754490 cri.go:89] found id: "b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c"
	I1217 08:24:39.207977  754490 cri.go:89] found id: "5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2"
	I1217 08:24:39.207982  754490 cri.go:89] found id: "39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463"
	I1217 08:24:39.207987  754490 cri.go:89] found id: ""
	I1217 08:24:39.208047  754490 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:24:39.220523  754490 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:24:39Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:24:39.220627  754490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:24:39.229234  754490 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:24:39.229257  754490 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:24:39.229311  754490 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:24:39.237852  754490 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:24:39.238974  754490 kubeconfig.go:125] found "pause-262039" server: "https://192.168.76.2:8443"
	I1217 08:24:39.240516  754490 kapi.go:59] client config for pause-262039: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.key", CAFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 08:24:39.241087  754490 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 08:24:39.241106  754490 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 08:24:39.241113  754490 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 08:24:39.241118  754490 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 08:24:39.241124  754490 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 08:24:39.241485  754490 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:24:39.251391  754490 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 08:24:39.251439  754490 kubeadm.go:602] duration metric: took 22.174467ms to restartPrimaryControlPlane
	I1217 08:24:39.251453  754490 kubeadm.go:403] duration metric: took 74.802113ms to StartCluster
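The diff above compares the kubeadm config already applied on the node (/var/tmp/minikube/kubeadm.yaml) with the freshly rendered /var/tmp/minikube/kubeadm.yaml.new; because they match, minikube skips re-running kubeadm and restartPrimaryControlPlane returns in about 22ms. A minimal sketch of that comparison step, assuming those two paths (this is not minikube's actual implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	proposed, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(current, proposed) {
		// Matches the log: "The running cluster does not require reconfiguration".
		fmt.Println("no reconfiguration needed")
		return
	}
	fmt.Println("kubeadm config changed, control plane must be reconfigured")
}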
	I1217 08:24:39.251475  754490 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:39.251574  754490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:24:39.252654  754490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:39.252915  754490 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:24:39.253022  754490 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:24:39.253151  754490 config.go:182] Loaded profile config "pause-262039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:24:39.259084  754490 out.go:179] * Verifying Kubernetes components...
	I1217 08:24:39.259091  754490 out.go:179] * Enabled addons: 
	I1217 08:24:39.260972  754490 addons.go:530] duration metric: took 7.95559ms for enable addons: enabled=[]
	I1217 08:24:39.261032  754490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:39.377312  754490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:24:39.391332  754490 node_ready.go:35] waiting up to 6m0s for node "pause-262039" to be "Ready" ...
	I1217 08:24:39.400693  754490 node_ready.go:49] node "pause-262039" is "Ready"
	I1217 08:24:39.400733  754490 node_ready.go:38] duration metric: took 9.366995ms for node "pause-262039" to be "Ready" ...
	I1217 08:24:39.400749  754490 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:24:39.400812  754490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:24:39.413225  754490 api_server.go:72] duration metric: took 160.264958ms to wait for apiserver process to appear ...
	I1217 08:24:39.413253  754490 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:24:39.413272  754490 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:24:39.418571  754490 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:24:39.419775  754490 api_server.go:141] control plane version: v1.34.3
	I1217 08:24:39.419806  754490 api_server.go:131] duration metric: took 6.545117ms to wait for apiserver health ...
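The health check above hits https://192.168.76.2:8443/healthz and gets a 200 "ok" back. A minimal Go sketch of such a probe, assuming the client certificate and CA paths printed in the client config earlier in this log:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Paths as logged for the pause-262039 profile.
	base := "/home/jenkins/minikube-integration/22182-552461/.minikube"
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	clientCert, err := tls.LoadX509KeyPair(
		base+"/profiles/pause-262039/client.crt",
		base+"/profiles/pause-262039/client.key",
	)
	if err != nil {
		panic(err)
	}

	httpClient := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			RootCAs:      pool,
			Certificates: []tls.Certificate{clientCert},
		},
	}}

	resp, err := httpClient.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}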
	I1217 08:24:39.419818  754490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:24:39.423155  754490 system_pods.go:59] 7 kube-system pods found
	I1217 08:24:39.423186  754490 system_pods.go:61] "coredns-66bc5c9577-sttv4" [016062e9-782c-4d30-b8c9-7792ef42b4c7] Running
	I1217 08:24:39.423192  754490 system_pods.go:61] "etcd-pause-262039" [d1813d6b-9960-4659-903e-25e6f0f601eb] Running
	I1217 08:24:39.423196  754490 system_pods.go:61] "kindnet-jl97s" [1543fcb7-2037-4f57-8878-b172586434df] Running
	I1217 08:24:39.423200  754490 system_pods.go:61] "kube-apiserver-pause-262039" [39900147-0dbf-495c-aaf6-6e26214719fe] Running
	I1217 08:24:39.423203  754490 system_pods.go:61] "kube-controller-manager-pause-262039" [d820ef12-fd27-4318-a365-bb053e355829] Running
	I1217 08:24:39.423206  754490 system_pods.go:61] "kube-proxy-tqfbc" [cf608927-6659-4165-8793-2f3df58e1282] Running
	I1217 08:24:39.423210  754490 system_pods.go:61] "kube-scheduler-pause-262039" [67a48b0f-48d7-4837-8751-ca81f8187eb3] Running
	I1217 08:24:39.423215  754490 system_pods.go:74] duration metric: took 3.391046ms to wait for pod list to return data ...
	I1217 08:24:39.423223  754490 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:24:39.425414  754490 default_sa.go:45] found service account: "default"
	I1217 08:24:39.425440  754490 default_sa.go:55] duration metric: took 2.211141ms for default service account to be created ...
	I1217 08:24:39.425453  754490 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:24:39.428143  754490 system_pods.go:86] 7 kube-system pods found
	I1217 08:24:39.428179  754490 system_pods.go:89] "coredns-66bc5c9577-sttv4" [016062e9-782c-4d30-b8c9-7792ef42b4c7] Running
	I1217 08:24:39.428197  754490 system_pods.go:89] "etcd-pause-262039" [d1813d6b-9960-4659-903e-25e6f0f601eb] Running
	I1217 08:24:39.428204  754490 system_pods.go:89] "kindnet-jl97s" [1543fcb7-2037-4f57-8878-b172586434df] Running
	I1217 08:24:39.428210  754490 system_pods.go:89] "kube-apiserver-pause-262039" [39900147-0dbf-495c-aaf6-6e26214719fe] Running
	I1217 08:24:39.428217  754490 system_pods.go:89] "kube-controller-manager-pause-262039" [d820ef12-fd27-4318-a365-bb053e355829] Running
	I1217 08:24:39.428223  754490 system_pods.go:89] "kube-proxy-tqfbc" [cf608927-6659-4165-8793-2f3df58e1282] Running
	I1217 08:24:39.428228  754490 system_pods.go:89] "kube-scheduler-pause-262039" [67a48b0f-48d7-4837-8751-ca81f8187eb3] Running
	I1217 08:24:39.428239  754490 system_pods.go:126] duration metric: took 2.778471ms to wait for k8s-apps to be running ...
	I1217 08:24:39.428253  754490 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:24:39.428315  754490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:24:39.442306  754490 system_svc.go:56] duration metric: took 14.040328ms WaitForService to wait for kubelet
	I1217 08:24:39.442340  754490 kubeadm.go:587] duration metric: took 189.384424ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:24:39.442358  754490 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:24:39.445402  754490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:24:39.445432  754490 node_conditions.go:123] node cpu capacity is 8
	I1217 08:24:39.445445  754490 node_conditions.go:105] duration metric: took 3.081526ms to run NodePressure ...
	I1217 08:24:39.445457  754490 start.go:242] waiting for startup goroutines ...
	I1217 08:24:39.445464  754490 start.go:247] waiting for cluster config update ...
	I1217 08:24:39.445471  754490 start.go:256] writing updated cluster config ...
	I1217 08:24:39.445812  754490 ssh_runner.go:195] Run: rm -f paused
	I1217 08:24:39.450174  754490 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:24:39.451115  754490 kapi.go:59] client config for pause-262039: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.key", CAFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 08:24:39.454353  754490 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sttv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.459039  754490 pod_ready.go:94] pod "coredns-66bc5c9577-sttv4" is "Ready"
	I1217 08:24:39.459071  754490 pod_ready.go:86] duration metric: took 4.688037ms for pod "coredns-66bc5c9577-sttv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.461549  754490 pod_ready.go:83] waiting for pod "etcd-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.466268  754490 pod_ready.go:94] pod "etcd-pause-262039" is "Ready"
	I1217 08:24:39.466294  754490 pod_ready.go:86] duration metric: took 4.720841ms for pod "etcd-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.468703  754490 pod_ready.go:83] waiting for pod "kube-apiserver-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.473300  754490 pod_ready.go:94] pod "kube-apiserver-pause-262039" is "Ready"
	I1217 08:24:39.473331  754490 pod_ready.go:86] duration metric: took 4.601309ms for pod "kube-apiserver-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.475485  754490 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:38.415699  753483 cli_runner.go:164] Run: docker container inspect missing-upgrade-442124 --format={{.State.Status}}
	W1217 08:24:38.437190  753483 cli_runner.go:211] docker container inspect missing-upgrade-442124 --format={{.State.Status}} returned with exit code 1
	I1217 08:24:38.437253  753483 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	I1217 08:24:38.437275  753483 oci.go:673] temporary error: container missing-upgrade-442124 status is  but expect it to be exited
	I1217 08:24:38.437318  753483 retry.go:31] will retry after 2.851991715s: couldn't verify container is exited. %v: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	I1217 08:24:39.855053  754490 pod_ready.go:94] pod "kube-controller-manager-pause-262039" is "Ready"
	I1217 08:24:39.855086  754490 pod_ready.go:86] duration metric: took 379.575091ms for pod "kube-controller-manager-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:40.055578  754490 pod_ready.go:83] waiting for pod "kube-proxy-tqfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:40.455331  754490 pod_ready.go:94] pod "kube-proxy-tqfbc" is "Ready"
	I1217 08:24:40.455367  754490 pod_ready.go:86] duration metric: took 399.760034ms for pod "kube-proxy-tqfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:40.654625  754490 pod_ready.go:83] waiting for pod "kube-scheduler-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:41.055027  754490 pod_ready.go:94] pod "kube-scheduler-pause-262039" is "Ready"
	I1217 08:24:41.055063  754490 pod_ready.go:86] duration metric: took 400.406972ms for pod "kube-scheduler-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:41.055078  754490 pod_ready.go:40] duration metric: took 1.604863415s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:24:41.102188  754490 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:24:41.241216  754490 out.go:179] * Done! kubectl is now configured to use "pause-262039" cluster and "default" namespace by default
	I1217 08:24:37.857658  755896 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:24:37.857920  755896 start.go:159] libmachine.API.Create for "kubernetes-upgrade-568559" (driver="docker")
	I1217 08:24:37.857974  755896 client.go:173] LocalClient.Create starting
	I1217 08:24:37.858044  755896 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:24:37.858078  755896 main.go:143] libmachine: Decoding PEM data...
	I1217 08:24:37.858100  755896 main.go:143] libmachine: Parsing certificate...
	I1217 08:24:37.858163  755896 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:24:37.858184  755896 main.go:143] libmachine: Decoding PEM data...
	I1217 08:24:37.858196  755896 main.go:143] libmachine: Parsing certificate...
	I1217 08:24:37.858620  755896 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-568559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:24:37.879276  755896 cli_runner.go:211] docker network inspect kubernetes-upgrade-568559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:24:37.879348  755896 network_create.go:284] running [docker network inspect kubernetes-upgrade-568559] to gather additional debugging logs...
	I1217 08:24:37.879366  755896 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-568559
	W1217 08:24:37.897970  755896 cli_runner.go:211] docker network inspect kubernetes-upgrade-568559 returned with exit code 1
	I1217 08:24:37.898017  755896 network_create.go:287] error running [docker network inspect kubernetes-upgrade-568559]: docker network inspect kubernetes-upgrade-568559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-568559 not found
	I1217 08:24:37.898035  755896 network_create.go:289] output of [docker network inspect kubernetes-upgrade-568559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-568559 not found
	
	** /stderr **
	I1217 08:24:37.898185  755896 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:24:37.917229  755896 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:24:37.917742  755896 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:24:37.918195  755896 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:24:37.918820  755896 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d6b1425cabf4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:5b:37:e9:24:53} reservation:<nil>}
	I1217 08:24:37.919730  755896 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020620c0}
	I1217 08:24:37.919761  755896 network_create.go:124] attempt to create docker network kubernetes-upgrade-568559 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 08:24:37.919825  755896 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-568559 kubernetes-upgrade-568559
	I1217 08:24:37.976193  755896 network_create.go:108] docker network kubernetes-upgrade-568559 192.168.85.0/24 created
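For the parallel kubernetes-upgrade-568559 start, the lines above show the subnet picker walking already-used docker bridge networks (192.168.49.0/24, .58, .67, .76) and settling on the first free candidate, 192.168.85.0/24. An illustrative Go sketch that reproduces the selection seen in this log (the step of 9 between candidates is simply what the log shows, not a documented rule):

package main

import "fmt"

func main() {
	// Subnets reported as taken in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	for third := 49; third <= 254; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet) // prints 192.168.85.0/24
			return
		}
	}
	fmt.Println("no free subnet found")
}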
	I1217 08:24:37.976226  755896 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-568559" container
	I1217 08:24:37.976293  755896 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:24:37.996077  755896 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-568559 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-568559 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:24:38.018604  755896 oci.go:103] Successfully created a docker volume kubernetes-upgrade-568559
	I1217 08:24:38.018775  755896 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-568559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-568559 --entrypoint /usr/bin/test -v kubernetes-upgrade-568559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:24:38.451801  755896 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-568559
	I1217 08:24:38.451900  755896 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 08:24:38.451919  755896 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:24:38.452010  755896 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-568559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.87170698Z" level=info msg="RDT not available in the host system"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.871721921Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.872637622Z" level=info msg="Conmon does support the --sync option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.872664334Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.87268259Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.873498436Z" level=info msg="Conmon does support the --sync option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.873525304Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.879573687Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.879596182Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.880188601Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.880607986Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.880668431Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.970285374Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-sttv4 Namespace:kube-system ID:30cb85fe0df5bcd17b7f3b7e1dc9a53d83c2d136eec1926592862da3c7cbe9fd UID:016062e9-782c-4d30-b8c9-7792ef42b4c7 NetNS:/var/run/netns/9a3ea54b-8bb9-4f93-ba46-382490e8d8b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003c2198}] Aliases:map[]}"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.9705702Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-sttv4 for CNI network kindnet (type=ptp)"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971179611Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971204175Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971245341Z" level=info msg="Create NRI interface"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971368644Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971382906Z" level=info msg="runtime interface created"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971393209Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971398911Z" level=info msg="runtime interface starting up..."
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971403752Z" level=info msg="starting plugins..."
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971420902Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971781038Z" level=info msg="No systemd watchdog enabled"
	Dec 17 08:24:37 pause-262039 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ee9a6c3deb88f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     12 seconds ago      Running             coredns                   0                   30cb85fe0df5b       coredns-66bc5c9577-sttv4               kube-system
	c5ee7a9c5e6d7       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   23 seconds ago      Running             kindnet-cni               0                   c5ae85618ebed       kindnet-jl97s                          kube-system
	5e70bbbc20e84       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     26 seconds ago      Running             kube-proxy                0                   5a2a85bf4cb1f       kube-proxy-tqfbc                       kube-system
	5783821a94e6d       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     37 seconds ago      Running             kube-scheduler            0                   40b22c24cb792       kube-scheduler-pause-262039            kube-system
	b0b03fe3ffed2       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     37 seconds ago      Running             kube-apiserver            0                   511881e3bede3       kube-apiserver-pause-262039            kube-system
	5d39484b93e47       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     37 seconds ago      Running             kube-controller-manager   0                   e166481d52398       kube-controller-manager-pause-262039   kube-system
	39a0211fc9fd6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     37 seconds ago      Running             etcd                      0                   07c7b0c966ea5       etcd-pause-262039                      kube-system
	
	
	==> coredns [ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53359 - 11255 "HINFO IN 3028947443263452986.4233501016873813547. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040755692s
	
	
	==> describe nodes <==
	Name:               pause-262039
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-262039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=pause-262039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-262039
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:24:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-262039
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5fe93599-bb52-4bac-8aed-adc24791beca
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sttv4                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-262039                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-jl97s                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-262039             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-262039    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-tqfbc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-262039             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-262039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-262039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-262039 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-262039 event: Registered Node pause-262039 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-262039 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 17 bb 9f 9a 4b 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 91 37 97 9f 01 08 06
	[Dec17 07:52] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.033977] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.024926] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.022908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.023867] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +2.047880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +4.032673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +8.190487] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[ +16.382857] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	
	
	==> etcd [39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463] <==
	{"level":"warn","ts":"2025-12-17T08:24:09.093800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.121076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.133396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.167923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.223304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.233493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.245291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.257979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.270076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.284121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.298755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.313702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.327253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.338232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.356769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.381016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.397518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.415768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.446839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.453957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.466099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.485128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.496467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.506103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.576716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52426","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:24:44 up  2:07,  0 user,  load average: 4.47, 1.95, 1.65
	Linux pause-262039 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f] <==
	I1217 08:24:21.122331       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:24:21.214844       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 08:24:21.215044       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:24:21.215069       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:24:21.215102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:24:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:24:21.419031       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:24:21.419085       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:24:21.419096       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:24:21.419676       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:24:21.819841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:24:21.819870       1 metrics.go:72] Registering metrics
	I1217 08:24:21.819910       1 controller.go:711] "Syncing nftables rules"
	I1217 08:24:31.421626       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:24:31.421720       1 main.go:301] handling current node
	I1217 08:24:41.426649       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:24:41.426693       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c] <==
	I1217 08:24:10.183190       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 08:24:10.183759       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:24:10.183802       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 08:24:10.188694       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:10.188766       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 08:24:10.193964       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:10.194168       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:24:10.366351       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:24:11.082713       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 08:24:11.086826       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 08:24:11.086847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:24:11.677828       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:24:11.735365       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:24:11.889718       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 08:24:11.897497       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1217 08:24:11.898859       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:24:11.903952       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:24:12.096601       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:24:12.869576       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:24:12.884588       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 08:24:12.894920       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:24:17.800688       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 08:24:18.006924       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:18.014581       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:18.202035       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2] <==
	I1217 08:24:17.093954       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 08:24:17.094015       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:17.094030       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:24:17.094041       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 08:24:17.094202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 08:24:17.094218       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 08:24:17.095416       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:24:17.095444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 08:24:17.095521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 08:24:17.095598       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 08:24:17.095648       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:24:17.095701       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 08:24:17.095714       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:24:17.095902       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 08:24:17.096047       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:24:17.096114       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 08:24:17.096216       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 08:24:17.097364       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 08:24:17.099652       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:24:17.100788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:24:17.100826       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:24:17.103126       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:24:17.104217       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:24:17.126693       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:32.048034       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41] <==
	I1217 08:24:18.277492       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:24:18.359226       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:24:18.459740       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:24:18.459786       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:24:18.459908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:24:18.515101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:24:18.515236       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:24:18.527413       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:24:18.528016       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:24:18.528077       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:24:18.530113       1 config.go:200] "Starting service config controller"
	I1217 08:24:18.530176       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:24:18.530218       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:24:18.530224       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:24:18.530237       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:24:18.530242       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:24:18.530647       1 config.go:309] "Starting node config controller"
	I1217 08:24:18.530658       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:24:18.530665       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:24:18.631634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:24:18.631745       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:24:18.631729       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417] <==
	E1217 08:24:10.144751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 08:24:10.144780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:24:10.144815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 08:24:10.144903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:24:10.144968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 08:24:10.145048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:24:10.145061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 08:24:10.145152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:24:10.145288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 08:24:10.146250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:24:10.148870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:24:10.949555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 08:24:11.087177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:24:11.093449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 08:24:11.094300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 08:24:11.102859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:24:11.109217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 08:24:11.182860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:24:11.203035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 08:24:11.233181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:24:11.284824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 08:24:11.285641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:24:11.318268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 08:24:11.356010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1217 08:24:13.039785       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:24:13 pause-262039 kubelet[1315]: I1217 08:24:13.884687    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-262039" podStartSLOduration=1.884671815 podStartE2EDuration="1.884671815s" podCreationTimestamp="2025-12-17 08:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:24:13.884299502 +0000 UTC m=+1.266419698" watchObservedRunningTime="2025-12-17 08:24:13.884671815 +0000 UTC m=+1.266792011"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.083305    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.084012    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837264    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf608927-6659-4165-8793-2f3df58e1282-kube-proxy\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837312    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf608927-6659-4165-8793-2f3df58e1282-xtables-lock\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837337    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bszlr\" (UniqueName: \"kubernetes.io/projected/cf608927-6659-4165-8793-2f3df58e1282-kube-api-access-bszlr\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837366    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf608927-6659-4165-8793-2f3df58e1282-lib-modules\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938044    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td6lx\" (UniqueName: \"kubernetes.io/projected/1543fcb7-2037-4f57-8878-b172586434df-kube-api-access-td6lx\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938797    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1543fcb7-2037-4f57-8878-b172586434df-cni-cfg\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938833    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1543fcb7-2037-4f57-8878-b172586434df-lib-modules\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938868    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1543fcb7-2037-4f57-8878-b172586434df-xtables-lock\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:21 pause-262039 kubelet[1315]: I1217 08:24:21.804414    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqfbc" podStartSLOduration=4.804388231 podStartE2EDuration="4.804388231s" podCreationTimestamp="2025-12-17 08:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:24:18.862822591 +0000 UTC m=+6.244942786" watchObservedRunningTime="2025-12-17 08:24:21.804388231 +0000 UTC m=+9.186508426"
	Dec 17 08:24:21 pause-262039 kubelet[1315]: I1217 08:24:21.804520    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jl97s" podStartSLOduration=2.069317733 podStartE2EDuration="4.804515959s" podCreationTimestamp="2025-12-17 08:24:17 +0000 UTC" firstStartedPulling="2025-12-17 08:24:18.150207731 +0000 UTC m=+5.532327919" lastFinishedPulling="2025-12-17 08:24:20.885405955 +0000 UTC m=+8.267526145" observedRunningTime="2025-12-17 08:24:21.804193235 +0000 UTC m=+9.186313453" watchObservedRunningTime="2025-12-17 08:24:21.804515959 +0000 UTC m=+9.186636153"
	Dec 17 08:24:31 pause-262039 kubelet[1315]: I1217 08:24:31.566280    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 08:24:31 pause-262039 kubelet[1315]: I1217 08:24:31.642702    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfttf\" (UniqueName: \"kubernetes.io/projected/016062e9-782c-4d30-b8c9-7792ef42b4c7-kube-api-access-bfttf\") pod \"coredns-66bc5c9577-sttv4\" (UID: \"016062e9-782c-4d30-b8c9-7792ef42b4c7\") " pod="kube-system/coredns-66bc5c9577-sttv4"
	Dec 17 08:24:31 pause-262039 kubelet[1315]: I1217 08:24:31.642764    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/016062e9-782c-4d30-b8c9-7792ef42b4c7-config-volume\") pod \"coredns-66bc5c9577-sttv4\" (UID: \"016062e9-782c-4d30-b8c9-7792ef42b4c7\") " pod="kube-system/coredns-66bc5c9577-sttv4"
	Dec 17 08:24:32 pause-262039 kubelet[1315]: I1217 08:24:32.852519    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sttv4" podStartSLOduration=14.852492179 podStartE2EDuration="14.852492179s" podCreationTimestamp="2025-12-17 08:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:24:32.838367625 +0000 UTC m=+20.220487832" watchObservedRunningTime="2025-12-17 08:24:32.852492179 +0000 UTC m=+20.234612374"
	Dec 17 08:24:37 pause-262039 kubelet[1315]: W1217 08:24:37.835287    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 08:24:37 pause-262039 kubelet[1315]: E1217 08:24:37.835419    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 08:24:37 pause-262039 kubelet[1315]: E1217 08:24:37.835478    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 08:24:37 pause-262039 kubelet[1315]: E1217 08:24:37.835495    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 08:24:41 pause-262039 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:24:41 pause-262039 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:24:41 pause-262039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:24:41 pause-262039 systemd[1]: kubelet.service: Consumed 1.328s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-262039 -n pause-262039
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-262039 -n pause-262039: exit status 2 (336.988297ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-262039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-262039
helpers_test.go:244: (dbg) docker inspect pause-262039:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4",
	        "Created": "2025-12-17T08:23:39.590968328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:23:39.930971935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/hosts",
	        "LogPath": "/var/lib/docker/containers/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4/756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4-json.log",
	        "Name": "/pause-262039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-262039:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-262039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "756222a0391f9ee23cc50a8dd1fb52b5836804744bc8faf2a6f931df087009f4",
	                "LowerDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aac8d2fa9a654a6dde10e30f6b206a225dcf38a6e9483b88f9a88d9c49ac8ef5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-262039",
	                "Source": "/var/lib/docker/volumes/pause-262039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-262039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-262039",
	                "name.minikube.sigs.k8s.io": "pause-262039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d188f108b8a9f59402016e8d2d09099af22f64b09286f3a475ee159967544810",
	            "SandboxKey": "/var/run/docker/netns/d188f108b8a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-262039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d6b1425cabf49ac0245c0db8995157bcbe363b2318a123863793eae3c8f725b4",
	                    "EndpointID": "e2c1db848f3099589afa6b10edbf71f763b64e989d6a37fb10bd516f36c672c7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "3e:6e:22:9a:41:03",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-262039",
	                        "756222a0391f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-262039 -n pause-262039
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-262039 -n pause-262039: exit status 2 (357.076482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-262039 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-262039 logs -n 25: (1.061342142s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-309868 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:21 UTC │ 17 Dec 25 08:22 UTC │
	│ stop    │ -p scheduled-stop-309868 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --cancel-scheduled                                                                                              │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │ 17 Dec 25 08:22 UTC │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │                     │
	│ stop    │ -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │ 17 Dec 25 08:22 UTC │
	│ delete  │ -p scheduled-stop-309868                                                                                                                 │ scheduled-stop-309868       │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
	│ start   │ -p insufficient-storage-691717 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-691717 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │                     │
	│ delete  │ -p insufficient-storage-691717                                                                                                           │ insufficient-storage-691717 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
	│ start   │ -p offline-crio-077569 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-077569         │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p pause-262039 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-262039                │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p missing-upgrade-442124 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-442124      │ jenkins │ v1.35.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p stopped-upgrade-387280 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-387280      │ jenkins │ v1.35.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:24 UTC │
	│ stop    │ stopped-upgrade-387280 stop                                                                                                              │ stopped-upgrade-387280      │ jenkins │ v1.35.0 │ 17 Dec 25 08:24 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p stopped-upgrade-387280 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-387280      │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	│ start   │ -p missing-upgrade-442124 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-442124      │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	│ start   │ -p pause-262039 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-262039                │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │ 17 Dec 25 08:24 UTC │
	│ delete  │ -p offline-crio-077569                                                                                                                   │ offline-crio-077569         │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │ 17 Dec 25 08:24 UTC │
	│ start   │ -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-568559   │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	│ pause   │ -p pause-262039 --alsologtostderr -v=5                                                                                                   │ pause-262039                │ jenkins │ v1.37.0 │ 17 Dec 25 08:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:24:37
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:24:37.615651  755896 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:24:37.615962  755896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:24:37.615974  755896 out.go:374] Setting ErrFile to fd 2...
	I1217 08:24:37.615979  755896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:24:37.616249  755896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:24:37.616849  755896 out.go:368] Setting JSON to false
	I1217 08:24:37.618044  755896 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7623,"bootTime":1765952255,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:24:37.618128  755896 start.go:143] virtualization: kvm guest
	I1217 08:24:37.620284  755896 out.go:179] * [kubernetes-upgrade-568559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:24:37.622373  755896 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:24:37.622396  755896 notify.go:221] Checking for updates...
	I1217 08:24:37.625043  755896 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:24:37.626405  755896 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:24:37.628553  755896 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:24:37.635377  755896 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:24:37.640362  755896 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:24:37.642529  755896 config.go:182] Loaded profile config "missing-upgrade-442124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 08:24:37.642764  755896 config.go:182] Loaded profile config "pause-262039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:24:37.642884  755896 config.go:182] Loaded profile config "stopped-upgrade-387280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 08:24:37.643020  755896 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:24:37.678011  755896 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:24:37.678174  755896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:24:37.747407  755896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-17 08:24:37.736113651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:24:37.747526  755896 docker.go:319] overlay module found
	I1217 08:24:37.749833  755896 out.go:179] * Using the docker driver based on user configuration
	I1217 08:24:37.752073  755896 start.go:309] selected driver: docker
	I1217 08:24:37.752095  755896 start.go:927] validating driver "docker" against <nil>
	I1217 08:24:37.752112  755896 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:24:37.752891  755896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:24:37.820619  755896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-17 08:24:37.809032746 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:24:37.820826  755896 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 08:24:37.821040  755896 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 08:24:37.823674  755896 out.go:179] * Using Docker driver with root privileges
	I1217 08:24:37.825172  755896 cni.go:84] Creating CNI manager for ""
	I1217 08:24:37.825250  755896 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:24:37.825265  755896 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:24:37.825378  755896 start.go:353] cluster config:
	{Name:kubernetes-upgrade-568559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-568559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:24:37.827403  755896 out.go:179] * Starting "kubernetes-upgrade-568559" primary control-plane node in "kubernetes-upgrade-568559" cluster
	I1217 08:24:37.828806  755896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:24:37.830139  755896 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:24:37.831341  755896 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 08:24:37.831376  755896 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 08:24:37.831406  755896 cache.go:65] Caching tarball of preloaded images
	I1217 08:24:37.831448  755896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:24:37.831492  755896 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:24:37.831506  755896 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1217 08:24:37.831634  755896 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/config.json ...
	I1217 08:24:37.831658  755896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/config.json: {Name:mka23573240d543b092a6289ab89276c12909cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:37.855096  755896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:24:37.855125  755896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:24:37.855145  755896 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:24:37.855186  755896 start.go:360] acquireMachinesLock for kubernetes-upgrade-568559: {Name:mk5636d609bb7e26490f37904bc8f5a2418d7e2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:24:37.855329  755896 start.go:364] duration metric: took 117.906µs to acquireMachinesLock for "kubernetes-upgrade-568559"
	I1217 08:24:37.855362  755896 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-568559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-568559 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:24:37.855444  755896 start.go:125] createHost starting for "" (driver="docker")
	I1217 08:24:34.892519  754490 out.go:252] * Updating the running docker "pause-262039" container ...
	I1217 08:24:34.892566  754490 machine.go:94] provisionDockerMachine start ...
	I1217 08:24:34.892643  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:34.916131  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:34.916302  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:34.916322  754490 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:24:35.054572  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-262039
	
	I1217 08:24:35.054600  754490 ubuntu.go:182] provisioning hostname "pause-262039"
	I1217 08:24:35.054738  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.080797  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:35.080931  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:35.080945  754490 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-262039 && echo "pause-262039" | sudo tee /etc/hostname
	I1217 08:24:35.253395  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-262039
	
	I1217 08:24:35.253494  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.287266  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:35.287417  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:35.287446  754490 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-262039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-262039/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-262039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:24:35.434248  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:24:35.434289  754490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:24:35.434314  754490 ubuntu.go:190] setting up certificates
	I1217 08:24:35.434326  754490 provision.go:84] configureAuth start
	I1217 08:24:35.434734  754490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-262039
	I1217 08:24:35.464087  754490 provision.go:143] copyHostCerts
	I1217 08:24:35.464267  754490 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:24:35.464328  754490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:24:35.464436  754490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:24:35.464707  754490 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:24:35.464720  754490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:24:35.464767  754490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:24:35.464882  754490 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:24:35.464964  754490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:24:35.465024  754490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:24:35.465173  754490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.pause-262039 san=[127.0.0.1 192.168.76.2 localhost minikube pause-262039]
	I1217 08:24:35.587464  754490 provision.go:177] copyRemoteCerts
	I1217 08:24:35.587624  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:24:35.587687  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.611823  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:35.717557  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:24:35.739694  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 08:24:35.763757  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 08:24:35.784591  754490 provision.go:87] duration metric: took 350.247491ms to configureAuth
	I1217 08:24:35.784624  754490 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:24:35.784847  754490 config.go:182] Loaded profile config "pause-262039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:24:35.784951  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:35.807992  754490 main.go:143] libmachine: Using SSH client type: native
	I1217 08:24:35.808126  754490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1217 08:24:35.808142  754490 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:24:36.150505  754490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:24:36.150552  754490 machine.go:97] duration metric: took 1.257977874s to provisionDockerMachine
	I1217 08:24:36.150566  754490 start.go:293] postStartSetup for "pause-262039" (driver="docker")
	I1217 08:24:36.150581  754490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:24:36.150652  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:24:36.150697  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.172828  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.270120  754490 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:24:36.274631  754490 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:24:36.274669  754490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:24:36.274682  754490 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:24:36.274741  754490 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:24:36.274813  754490 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:24:36.274908  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:24:36.283986  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:24:36.304091  754490 start.go:296] duration metric: took 153.50314ms for postStartSetup
	I1217 08:24:36.304181  754490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:24:36.304228  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.325181  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.420978  754490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:24:36.428014  754490 fix.go:56] duration metric: took 1.559371912s for fixHost
	I1217 08:24:36.428108  754490 start.go:83] releasing machines lock for "pause-262039", held for 1.559478106s
	I1217 08:24:36.428197  754490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-262039
	I1217 08:24:36.453160  754490 ssh_runner.go:195] Run: cat /version.json
	I1217 08:24:36.453246  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.453266  754490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:24:36.453341  754490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-262039
	I1217 08:24:36.481238  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.481399  754490 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/pause-262039/id_ed25519 Username:docker}
	I1217 08:24:36.652392  754490 ssh_runner.go:195] Run: systemctl --version
	I1217 08:24:36.661695  754490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:24:36.710178  754490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:24:36.715525  754490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:24:36.715653  754490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:24:36.725897  754490 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:24:36.725936  754490 start.go:496] detecting cgroup driver to use...
	I1217 08:24:36.725972  754490 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:24:36.726028  754490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:24:36.743524  754490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:24:36.759920  754490 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:24:36.759983  754490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:24:36.783854  754490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:24:36.801311  754490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:24:36.973840  754490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:24:37.119327  754490 docker.go:234] disabling docker service ...
	I1217 08:24:37.119405  754490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:24:37.137697  754490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:24:37.157010  754490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:24:37.305083  754490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:24:37.457499  754490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:24:37.471668  754490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:24:37.490836  754490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:24:37.490903  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.504421  754490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:24:37.504496  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.514969  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.525453  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.537468  754490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:24:37.549014  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.560901  754490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.570089  754490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:24:37.579967  754490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:24:37.588979  754490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:24:37.597771  754490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:37.753260  754490 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 08:24:37.975517  754490 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:24:37.975622  754490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:24:37.980090  754490 start.go:564] Will wait 60s for crictl version
	I1217 08:24:37.980152  754490 ssh_runner.go:195] Run: which crictl
	I1217 08:24:37.984008  754490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:24:38.013157  754490 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:24:38.013250  754490 ssh_runner.go:195] Run: crio --version
	I1217 08:24:38.047128  754490 ssh_runner.go:195] Run: crio --version
	I1217 08:24:38.086744  754490 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
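The sed commands logged at 08:24:37.490-37.570 rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A quick way to confirm the resulting values is sketched below; the grep pattern only covers the keys those commands touch, and the expected lines are inferred from the sed expressions rather than captured from the host:

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the sed edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])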
	I1217 08:24:34.135128  752745 cli_runner.go:164] Run: docker network inspect stopped-upgrade-387280 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:24:34.153894  752745 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:24:34.157973  752745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:24:34.171199  752745 kubeadm.go:884] updating cluster {Name:stopped-upgrade-387280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-387280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:24:34.171306  752745 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1217 08:24:34.171354  752745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:34.214514  752745 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:34.214550  752745 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:24:34.214606  752745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:34.249793  752745 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:34.249819  752745 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:24:34.249828  752745 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1217 08:24:34.249925  752745 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-387280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-387280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:24:34.249994  752745 ssh_runner.go:195] Run: crio config
	I1217 08:24:34.294227  752745 cni.go:84] Creating CNI manager for ""
	I1217 08:24:34.294254  752745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:24:34.294272  752745 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:24:34.294292  752745 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-387280 NodeName:stopped-upgrade-387280 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:24:34.294433  752745 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-387280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
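The generated kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new (scp at 08:24:34.353) before the restart logic below decides how to apply it. As a rough sketch only, a config of this shape could be fed to the bundled kubeadm by hand; minikube orchestrates this step itself, and the --ignore-preflight-errors flag here is an assumption rather than something taken from this log:

	sudo /var/lib/minikube/binaries/v1.32.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=all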
	
	I1217 08:24:34.294498  752745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1217 08:24:34.304319  752745 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:24:34.304387  752745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:24:34.314285  752745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 08:24:34.333665  752745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:24:34.353513  752745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 08:24:34.373210  752745 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:24:34.377165  752745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:24:34.389424  752745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:34.455398  752745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:24:34.478621  752745 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280 for IP: 192.168.103.2
	I1217 08:24:34.478645  752745 certs.go:195] generating shared ca certs ...
	I1217 08:24:34.478664  752745 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:34.478811  752745 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:24:34.478849  752745 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:24:34.478860  752745 certs.go:257] generating profile certs ...
	I1217 08:24:34.478954  752745 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/client.key
	I1217 08:24:34.479026  752745 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/apiserver.key.0c52ecad
	I1217 08:24:34.479063  752745 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/proxy-client.key
	I1217 08:24:34.479170  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:24:34.479217  752745 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:24:34.479230  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:24:34.479256  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:24:34.479282  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:24:34.479308  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:24:34.479351  752745 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:24:34.479947  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:24:34.510421  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:24:34.541230  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:24:34.578498  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:24:34.609114  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 08:24:34.636714  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:24:34.668471  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:24:34.698063  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:24:34.730147  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:24:34.760706  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:24:34.794375  752745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:24:34.828219  752745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:24:34.852719  752745 ssh_runner.go:195] Run: openssl version
	I1217 08:24:34.860318  752745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.872130  752745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:24:34.884065  752745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.888895  752745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.888948  752745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:24:34.896591  752745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:24:34.909284  752745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.920628  752745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:24:34.930414  752745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.934896  752745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.934978  752745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:24:34.944045  752745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:24:34.954077  752745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.965397  752745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:24:34.976959  752745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.981135  752745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.981204  752745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:34.990573  752745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:24:35.001255  752745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:24:35.005495  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:24:35.013088  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:24:35.020522  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:24:35.028475  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:24:35.037331  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:24:35.045484  752745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 08:24:35.053960  752745 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-387280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-387280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:24:35.054071  752745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:24:35.054143  752745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:24:35.116244  752745 cri.go:89] found id: ""
	I1217 08:24:35.116513  752745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:24:35.139202  752745 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:24:35.139227  752745 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:24:35.139286  752745 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:24:35.156138  752745 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:24:35.157349  752745 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-387280" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:24:35.158038  752745 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-387280" cluster setting kubeconfig missing "stopped-upgrade-387280" context setting]
	I1217 08:24:35.159053  752745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:35.160317  752745 kapi.go:59] client config for stopped-upgrade-387280: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/stopped-upgrade-387280/client.key", CAFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 08:24:35.161081  752745 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 08:24:35.161107  752745 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 08:24:35.161114  752745 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 08:24:35.161121  752745 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 08:24:35.161127  752745 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 08:24:35.161616  752745 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:24:35.175769  752745 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 08:24:14.208017901 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 08:24:34.370430964 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
	I1217 08:24:35.175846  752745 kubeadm.go:1161] stopping kube-system containers ...
	I1217 08:24:35.175869  752745 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 08:24:35.175930  752745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:24:35.248765  752745 cri.go:89] found id: "a78ad306f0dda4bec4ab84ea319926ee2c424d1293fe655a1112e3e41147254f"
	I1217 08:24:35.248997  752745 cri.go:89] found id: "b4d42c183d8713539c409a6f3197c9fef68188e04227888130c078bb5518e83f"
	I1217 08:24:35.249024  752745 cri.go:89] found id: "776aefbc0e5cb953d7d558fee0bf6a02210f34d748acaa243f9596852e21debc"
	I1217 08:24:35.249072  752745 cri.go:89] found id: "68a7512cc355764576a7d4eaa3332d536ea531543586a239a8ef276443dd0c33"
	I1217 08:24:35.249098  752745 cri.go:89] found id: ""
	I1217 08:24:35.249110  752745 cri.go:252] Stopping containers: [a78ad306f0dda4bec4ab84ea319926ee2c424d1293fe655a1112e3e41147254f b4d42c183d8713539c409a6f3197c9fef68188e04227888130c078bb5518e83f 776aefbc0e5cb953d7d558fee0bf6a02210f34d748acaa243f9596852e21debc 68a7512cc355764576a7d4eaa3332d536ea531543586a239a8ef276443dd0c33]
	I1217 08:24:35.249200  752745 ssh_runner.go:195] Run: which crictl
	I1217 08:24:35.256212  752745 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a78ad306f0dda4bec4ab84ea319926ee2c424d1293fe655a1112e3e41147254f b4d42c183d8713539c409a6f3197c9fef68188e04227888130c078bb5518e83f 776aefbc0e5cb953d7d558fee0bf6a02210f34d748acaa243f9596852e21debc 68a7512cc355764576a7d4eaa3332d536ea531543586a239a8ef276443dd0c33
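
The two steps above show minikube detecting kubeadm config drift by diffing the on-host /var/tmp/minikube/kubeadm.yaml against the freshly generated kubeadm.yaml.new, then stopping the kube-system containers via crictl before reconfiguring. A minimal sketch of that drift check, assuming both files exist locally and treating diff exit code 1 simply as "files differ"; the function name here is illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and reports whether the files differ.
// diff exits 0 when identical, 1 when different, >1 on a real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil // out holds the unified diff, as logged above
	}
	return false, "", err // missing file, bad invocation, etc.
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:")
		fmt.Println(diff)
	}
}
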
	I1217 08:24:38.088658  754490 cli_runner.go:164] Run: docker network inspect pause-262039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:24:38.109257  754490 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 08:24:38.114036  754490 kubeadm.go:884] updating cluster {Name:pause-262039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-262039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:24:38.114253  754490 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:24:38.114342  754490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:38.155067  754490 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:38.155092  754490 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:24:38.155150  754490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:24:38.184976  754490 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:24:38.185006  754490 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:24:38.185016  754490 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1217 08:24:38.185143  754490 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-262039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-262039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:24:38.185224  754490 ssh_runner.go:195] Run: crio config
	I1217 08:24:38.244368  754490 cni.go:84] Creating CNI manager for ""
	I1217 08:24:38.244397  754490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:24:38.244423  754490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:24:38.244455  754490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-262039 NodeName:pause-262039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:24:38.244660  754490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-262039"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:24:38.244741  754490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:24:38.253612  754490 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:24:38.253676  754490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:24:38.261948  754490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 08:24:38.276114  754490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:24:38.293654  754490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
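
The kubelet unit and the full kubeadm config printed above are generated from templates and then copied onto the node (here as a 2208-byte kubeadm.yaml.new). As a rough sketch of that rendering step, the snippet below fills a trimmed ClusterConfiguration fragment with text/template; the struct fields and the fragment itself are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A trimmed, illustrative fragment of the ClusterConfiguration shown in the log.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, ControlPlaneEndpoint, DNSDomain, PodSubnet, ServiceSubnet string
	}{"v1.34.3", "control-plane.minikube.internal:8443", "cluster.local", "10.244.0.0/16", "10.96.0.0/12"})
}
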
	I1217 08:24:38.315984  754490 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:24:38.320507  754490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:38.455834  754490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:24:38.472730  754490 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039 for IP: 192.168.76.2
	I1217 08:24:38.472757  754490 certs.go:195] generating shared ca certs ...
	I1217 08:24:38.472781  754490 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:38.472990  754490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:24:38.473050  754490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:24:38.473064  754490 certs.go:257] generating profile certs ...
	I1217 08:24:38.473173  754490 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.key
	I1217 08:24:38.473240  754490 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/apiserver.key.d74e051e
	I1217 08:24:38.473304  754490 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/proxy-client.key
	I1217 08:24:38.473472  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:24:38.473518  754490 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:24:38.473543  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:24:38.473582  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:24:38.473616  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:24:38.473652  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:24:38.473714  754490 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:24:38.474526  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:24:38.495973  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:24:38.519170  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:24:38.538988  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:24:38.560112  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 08:24:38.579898  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 08:24:38.611812  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:24:38.635025  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:24:38.662804  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:24:38.684830  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:24:38.705469  754490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:24:38.728352  754490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:24:38.747605  754490 ssh_runner.go:195] Run: openssl version
	I1217 08:24:38.755061  754490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.764464  754490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:24:38.773001  754490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.777177  754490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.777254  754490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:24:38.814977  754490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:24:38.823547  754490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.831651  754490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:24:38.841499  754490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.845785  754490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.845857  754490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:24:38.881567  754490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:24:38.889980  754490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.898303  754490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:24:38.906586  754490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.910965  754490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.911039  754490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:24:38.946634  754490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
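
The ln / openssl x509 -hash / test -L sequence above follows OpenSSL's CA lookup convention: each trusted certificate is reachable under /etc/ssl/certs through a symlink named <subject-hash>.0. A small sketch of that step, assuming the openssl binary is on PATH and the caller may write /etc/ssl/certs; the certificate path below is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of certPath and links it into
// /etc/ssl/certs as <hash>.0, mirroring the hash-and-symlink steps in the log.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
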
	I1217 08:24:38.955550  754490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:24:38.960011  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:24:38.996135  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:24:39.031316  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:24:39.070161  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:24:39.106248  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:24:39.141394  754490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
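
Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same question can be answered in-process with crypto/x509; this is a sketch assuming the file holds a single PEM-encoded certificate, with the path taken from the log only for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+d, the same check `openssl x509 -checkend <seconds>` performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}
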
	I1217 08:24:39.176665  754490 kubeadm.go:401] StartCluster: {Name:pause-262039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-262039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:24:39.176802  754490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:24:39.176865  754490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:24:39.207924  754490 cri.go:89] found id: "ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e"
	I1217 08:24:39.207955  754490 cri.go:89] found id: "c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f"
	I1217 08:24:39.207962  754490 cri.go:89] found id: "5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41"
	I1217 08:24:39.207968  754490 cri.go:89] found id: "5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417"
	I1217 08:24:39.207973  754490 cri.go:89] found id: "b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c"
	I1217 08:24:39.207977  754490 cri.go:89] found id: "5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2"
	I1217 08:24:39.207982  754490 cri.go:89] found id: "39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463"
	I1217 08:24:39.207987  754490 cri.go:89] found id: ""
	I1217 08:24:39.208047  754490 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:24:39.220523  754490 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:24:39Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:24:39.220627  754490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:24:39.229234  754490 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:24:39.229257  754490 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:24:39.229311  754490 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:24:39.237852  754490 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:24:39.238974  754490 kubeconfig.go:125] found "pause-262039" server: "https://192.168.76.2:8443"
	I1217 08:24:39.240516  754490 kapi.go:59] client config for pause-262039: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.key", CAFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 08:24:39.241087  754490 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 08:24:39.241106  754490 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 08:24:39.241113  754490 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 08:24:39.241118  754490 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 08:24:39.241124  754490 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 08:24:39.241485  754490 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:24:39.251391  754490 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 08:24:39.251439  754490 kubeadm.go:602] duration metric: took 22.174467ms to restartPrimaryControlPlane
	I1217 08:24:39.251453  754490 kubeadm.go:403] duration metric: took 74.802113ms to StartCluster
	I1217 08:24:39.251475  754490 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:39.251574  754490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:24:39.252654  754490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:24:39.252915  754490 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:24:39.253022  754490 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:24:39.253151  754490 config.go:182] Loaded profile config "pause-262039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:24:39.259084  754490 out.go:179] * Verifying Kubernetes components...
	I1217 08:24:39.259091  754490 out.go:179] * Enabled addons: 
	I1217 08:24:39.260972  754490 addons.go:530] duration metric: took 7.95559ms for enable addons: enabled=[]
	I1217 08:24:39.261032  754490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:24:39.377312  754490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:24:39.391332  754490 node_ready.go:35] waiting up to 6m0s for node "pause-262039" to be "Ready" ...
	I1217 08:24:39.400693  754490 node_ready.go:49] node "pause-262039" is "Ready"
	I1217 08:24:39.400733  754490 node_ready.go:38] duration metric: took 9.366995ms for node "pause-262039" to be "Ready" ...
	I1217 08:24:39.400749  754490 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:24:39.400812  754490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:24:39.413225  754490 api_server.go:72] duration metric: took 160.264958ms to wait for apiserver process to appear ...
	I1217 08:24:39.413253  754490 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:24:39.413272  754490 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:24:39.418571  754490 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:24:39.419775  754490 api_server.go:141] control plane version: v1.34.3
	I1217 08:24:39.419806  754490 api_server.go:131] duration metric: took 6.545117ms to wait for apiserver health ...
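
The health gate above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with the body "ok". A minimal sketch of that probe, trusting the cluster CA file named in the log; everything else here is illustrative:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Trust the cluster CA so the apiserver's serving certificate verifies.
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}}}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers: 200 ok
}
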
	I1217 08:24:39.419818  754490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:24:39.423155  754490 system_pods.go:59] 7 kube-system pods found
	I1217 08:24:39.423186  754490 system_pods.go:61] "coredns-66bc5c9577-sttv4" [016062e9-782c-4d30-b8c9-7792ef42b4c7] Running
	I1217 08:24:39.423192  754490 system_pods.go:61] "etcd-pause-262039" [d1813d6b-9960-4659-903e-25e6f0f601eb] Running
	I1217 08:24:39.423196  754490 system_pods.go:61] "kindnet-jl97s" [1543fcb7-2037-4f57-8878-b172586434df] Running
	I1217 08:24:39.423200  754490 system_pods.go:61] "kube-apiserver-pause-262039" [39900147-0dbf-495c-aaf6-6e26214719fe] Running
	I1217 08:24:39.423203  754490 system_pods.go:61] "kube-controller-manager-pause-262039" [d820ef12-fd27-4318-a365-bb053e355829] Running
	I1217 08:24:39.423206  754490 system_pods.go:61] "kube-proxy-tqfbc" [cf608927-6659-4165-8793-2f3df58e1282] Running
	I1217 08:24:39.423210  754490 system_pods.go:61] "kube-scheduler-pause-262039" [67a48b0f-48d7-4837-8751-ca81f8187eb3] Running
	I1217 08:24:39.423215  754490 system_pods.go:74] duration metric: took 3.391046ms to wait for pod list to return data ...
	I1217 08:24:39.423223  754490 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:24:39.425414  754490 default_sa.go:45] found service account: "default"
	I1217 08:24:39.425440  754490 default_sa.go:55] duration metric: took 2.211141ms for default service account to be created ...
	I1217 08:24:39.425453  754490 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:24:39.428143  754490 system_pods.go:86] 7 kube-system pods found
	I1217 08:24:39.428179  754490 system_pods.go:89] "coredns-66bc5c9577-sttv4" [016062e9-782c-4d30-b8c9-7792ef42b4c7] Running
	I1217 08:24:39.428197  754490 system_pods.go:89] "etcd-pause-262039" [d1813d6b-9960-4659-903e-25e6f0f601eb] Running
	I1217 08:24:39.428204  754490 system_pods.go:89] "kindnet-jl97s" [1543fcb7-2037-4f57-8878-b172586434df] Running
	I1217 08:24:39.428210  754490 system_pods.go:89] "kube-apiserver-pause-262039" [39900147-0dbf-495c-aaf6-6e26214719fe] Running
	I1217 08:24:39.428217  754490 system_pods.go:89] "kube-controller-manager-pause-262039" [d820ef12-fd27-4318-a365-bb053e355829] Running
	I1217 08:24:39.428223  754490 system_pods.go:89] "kube-proxy-tqfbc" [cf608927-6659-4165-8793-2f3df58e1282] Running
	I1217 08:24:39.428228  754490 system_pods.go:89] "kube-scheduler-pause-262039" [67a48b0f-48d7-4837-8751-ca81f8187eb3] Running
	I1217 08:24:39.428239  754490 system_pods.go:126] duration metric: took 2.778471ms to wait for k8s-apps to be running ...
	I1217 08:24:39.428253  754490 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:24:39.428315  754490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:24:39.442306  754490 system_svc.go:56] duration metric: took 14.040328ms WaitForService to wait for kubelet
	I1217 08:24:39.442340  754490 kubeadm.go:587] duration metric: took 189.384424ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:24:39.442358  754490 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:24:39.445402  754490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:24:39.445432  754490 node_conditions.go:123] node cpu capacity is 8
	I1217 08:24:39.445445  754490 node_conditions.go:105] duration metric: took 3.081526ms to run NodePressure ...
	I1217 08:24:39.445457  754490 start.go:242] waiting for startup goroutines ...
	I1217 08:24:39.445464  754490 start.go:247] waiting for cluster config update ...
	I1217 08:24:39.445471  754490 start.go:256] writing updated cluster config ...
	I1217 08:24:39.445812  754490 ssh_runner.go:195] Run: rm -f paused
	I1217 08:24:39.450174  754490 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:24:39.451115  754490 kapi.go:59] client config for pause-262039: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/pause-262039/client.key", CAFile:"/home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 08:24:39.454353  754490 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sttv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.459039  754490 pod_ready.go:94] pod "coredns-66bc5c9577-sttv4" is "Ready"
	I1217 08:24:39.459071  754490 pod_ready.go:86] duration metric: took 4.688037ms for pod "coredns-66bc5c9577-sttv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.461549  754490 pod_ready.go:83] waiting for pod "etcd-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.466268  754490 pod_ready.go:94] pod "etcd-pause-262039" is "Ready"
	I1217 08:24:39.466294  754490 pod_ready.go:86] duration metric: took 4.720841ms for pod "etcd-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.468703  754490 pod_ready.go:83] waiting for pod "kube-apiserver-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.473300  754490 pod_ready.go:94] pod "kube-apiserver-pause-262039" is "Ready"
	I1217 08:24:39.473331  754490 pod_ready.go:86] duration metric: took 4.601309ms for pod "kube-apiserver-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:39.475485  754490 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:38.415699  753483 cli_runner.go:164] Run: docker container inspect missing-upgrade-442124 --format={{.State.Status}}
	W1217 08:24:38.437190  753483 cli_runner.go:211] docker container inspect missing-upgrade-442124 --format={{.State.Status}} returned with exit code 1
	I1217 08:24:38.437253  753483 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	I1217 08:24:38.437275  753483 oci.go:673] temporary error: container missing-upgrade-442124 status is  but expect it to be exited
	I1217 08:24:38.437318  753483 retry.go:31] will retry after 2.851991715s: couldn't verify container is exited. %v: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	I1217 08:24:39.855053  754490 pod_ready.go:94] pod "kube-controller-manager-pause-262039" is "Ready"
	I1217 08:24:39.855086  754490 pod_ready.go:86] duration metric: took 379.575091ms for pod "kube-controller-manager-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:40.055578  754490 pod_ready.go:83] waiting for pod "kube-proxy-tqfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:40.455331  754490 pod_ready.go:94] pod "kube-proxy-tqfbc" is "Ready"
	I1217 08:24:40.455367  754490 pod_ready.go:86] duration metric: took 399.760034ms for pod "kube-proxy-tqfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:40.654625  754490 pod_ready.go:83] waiting for pod "kube-scheduler-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:41.055027  754490 pod_ready.go:94] pod "kube-scheduler-pause-262039" is "Ready"
	I1217 08:24:41.055063  754490 pod_ready.go:86] duration metric: took 400.406972ms for pod "kube-scheduler-pause-262039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:24:41.055078  754490 pod_ready.go:40] duration metric: took 1.604863415s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:24:41.102188  754490 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:24:41.241216  754490 out.go:179] * Done! kubectl is now configured to use "pause-262039" cluster and "default" namespace by default
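
The pod_ready waits above poll the kube-system pods by label (k8s-app=kube-dns, component=etcd, and so on) until each reports the Ready condition. A compact sketch of one such check using client-go, assuming the module is available and the kubeconfig sits at the default location; the helper name is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True, which is what
// the "pod ... is Ready" lines above are asserting.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
	}
}
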
	I1217 08:24:37.857658  755896 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:24:37.857920  755896 start.go:159] libmachine.API.Create for "kubernetes-upgrade-568559" (driver="docker")
	I1217 08:24:37.857974  755896 client.go:173] LocalClient.Create starting
	I1217 08:24:37.858044  755896 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:24:37.858078  755896 main.go:143] libmachine: Decoding PEM data...
	I1217 08:24:37.858100  755896 main.go:143] libmachine: Parsing certificate...
	I1217 08:24:37.858163  755896 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:24:37.858184  755896 main.go:143] libmachine: Decoding PEM data...
	I1217 08:24:37.858196  755896 main.go:143] libmachine: Parsing certificate...
	I1217 08:24:37.858620  755896 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-568559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:24:37.879276  755896 cli_runner.go:211] docker network inspect kubernetes-upgrade-568559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:24:37.879348  755896 network_create.go:284] running [docker network inspect kubernetes-upgrade-568559] to gather additional debugging logs...
	I1217 08:24:37.879366  755896 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-568559
	W1217 08:24:37.897970  755896 cli_runner.go:211] docker network inspect kubernetes-upgrade-568559 returned with exit code 1
	I1217 08:24:37.898017  755896 network_create.go:287] error running [docker network inspect kubernetes-upgrade-568559]: docker network inspect kubernetes-upgrade-568559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-568559 not found
	I1217 08:24:37.898035  755896 network_create.go:289] output of [docker network inspect kubernetes-upgrade-568559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-568559 not found
	
	** /stderr **
	I1217 08:24:37.898185  755896 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:24:37.917229  755896 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:24:37.917742  755896 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:24:37.918195  755896 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:24:37.918820  755896 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d6b1425cabf4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:5b:37:e9:24:53} reservation:<nil>}
	I1217 08:24:37.919730  755896 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020620c0}
	I1217 08:24:37.919761  755896 network_create.go:124] attempt to create docker network kubernetes-upgrade-568559 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 08:24:37.919825  755896 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-568559 kubernetes-upgrade-568559
	I1217 08:24:37.976193  755896 network_create.go:108] docker network kubernetes-upgrade-568559 192.168.85.0/24 created
	I1217 08:24:37.976226  755896 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-568559" container
	I1217 08:24:37.976293  755896 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:24:37.996077  755896 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-568559 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-568559 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:24:38.018604  755896 oci.go:103] Successfully created a docker volume kubernetes-upgrade-568559
	I1217 08:24:38.018775  755896 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-568559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-568559 --entrypoint /usr/bin/test -v kubernetes-upgrade-568559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:24:38.451801  755896 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-568559
	I1217 08:24:38.451900  755896 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 08:24:38.451919  755896 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:24:38.452010  755896 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-568559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
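
For kubernetes-upgrade-568559 the log walks the 192.168.x.0/24 candidates, picks the first free subnet (192.168.85.0/24), creates a labelled bridge network, and then extracts the preloaded image tarball into the profile volume with a throwaway kicbase container. A sketch of the network-creation step only, shelling out to the docker CLI as the log does; the flag set is trimmed to the essentials and the names are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork creates a labelled bridge network with a fixed subnet,
// roughly matching the `docker network create` invocation logged above.
func createClusterNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createClusterNetwork("kubernetes-upgrade-568559", "192.168.85.0/24", "192.168.85.1", 1500); err != nil {
		fmt.Println(err)
	}
}
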
	I1217 08:24:41.290921  753483 cli_runner.go:164] Run: docker container inspect missing-upgrade-442124 --format={{.State.Status}}
	W1217 08:24:41.308143  753483 cli_runner.go:211] docker container inspect missing-upgrade-442124 --format={{.State.Status}} returned with exit code 1
	I1217 08:24:41.308217  753483 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	I1217 08:24:41.308233  753483 oci.go:673] temporary error: container missing-upgrade-442124 status is  but expect it to be exited
	I1217 08:24:41.308275  753483 retry.go:31] will retry after 2.896631245s: couldn't verify container is exited. %v: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	I1217 08:24:44.206687  753483 cli_runner.go:164] Run: docker container inspect missing-upgrade-442124 --format={{.State.Status}}
	W1217 08:24:44.231447  753483 cli_runner.go:211] docker container inspect missing-upgrade-442124 --format={{.State.Status}} returned with exit code 1
	I1217 08:24:44.231552  753483 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	I1217 08:24:44.231586  753483 oci.go:673] temporary error: container missing-upgrade-442124 status is  but expect it to be exited
	I1217 08:24:44.231634  753483 oci.go:88] couldn't shut down missing-upgrade-442124 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-442124": docker container inspect missing-upgrade-442124 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-442124
	 
	I1217 08:24:44.231704  753483 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-442124
	I1217 08:24:44.255219  753483 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-442124
	W1217 08:24:44.276192  753483 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-442124 returned with exit code 1
	I1217 08:24:44.276274  753483 cli_runner.go:164] Run: docker network inspect missing-upgrade-442124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:24:44.301200  753483 cli_runner.go:164] Run: docker network rm missing-upgrade-442124
	I1217 08:24:44.417711  753483 fix.go:124] Sleeping 1 second for extra luck!
	I1217 08:24:45.418690  753483 start.go:125] createHost starting for "" (driver="docker")
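
The missing-upgrade-442124 teardown above keeps retrying docker container inspect with a growing delay until it gives up and force-removes the container and its network. The retry loop itself is a generic pattern; a minimal sketch follows, with the attempt count and delays chosen arbitrarily for illustration rather than matching minikube's actual tuning:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, sleeping between tries and growing the
// delay each round, similar in spirit to the retry.go backoff in the log.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // simple growth; production backoff is usually capped and jittered
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("container state not yet 'exited'")
		}
		return nil
	})
	fmt.Println("done:", err)
}
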
	
	
	==> CRI-O <==
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.87170698Z" level=info msg="RDT not available in the host system"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.871721921Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.872637622Z" level=info msg="Conmon does support the --sync option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.872664334Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.87268259Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.873498436Z" level=info msg="Conmon does support the --sync option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.873525304Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.879573687Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.879596182Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.880188601Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.880607986Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.880668431Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.970285374Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-sttv4 Namespace:kube-system ID:30cb85fe0df5bcd17b7f3b7e1dc9a53d83c2d136eec1926592862da3c7cbe9fd UID:016062e9-782c-4d30-b8c9-7792ef42b4c7 NetNS:/var/run/netns/9a3ea54b-8bb9-4f93-ba46-382490e8d8b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003c2198}] Aliases:map[]}"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.9705702Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-sttv4 for CNI network kindnet (type=ptp)"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971179611Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971204175Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971245341Z" level=info msg="Create NRI interface"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971368644Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971382906Z" level=info msg="runtime interface created"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971393209Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971398911Z" level=info msg="runtime interface starting up..."
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971403752Z" level=info msg="starting plugins..."
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971420902Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 08:24:37 pause-262039 crio[2229]: time="2025-12-17T08:24:37.971781038Z" level=info msg="No systemd watchdog enabled"
	Dec 17 08:24:37 pause-262039 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ee9a6c3deb88f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     14 seconds ago      Running             coredns                   0                   30cb85fe0df5b       coredns-66bc5c9577-sttv4               kube-system
	c5ee7a9c5e6d7       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   25 seconds ago      Running             kindnet-cni               0                   c5ae85618ebed       kindnet-jl97s                          kube-system
	5e70bbbc20e84       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     28 seconds ago      Running             kube-proxy                0                   5a2a85bf4cb1f       kube-proxy-tqfbc                       kube-system
	5783821a94e6d       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     39 seconds ago      Running             kube-scheduler            0                   40b22c24cb792       kube-scheduler-pause-262039            kube-system
	b0b03fe3ffed2       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     39 seconds ago      Running             kube-apiserver            0                   511881e3bede3       kube-apiserver-pause-262039            kube-system
	5d39484b93e47       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     39 seconds ago      Running             kube-controller-manager   0                   e166481d52398       kube-controller-manager-pause-262039   kube-system
	39a0211fc9fd6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     39 seconds ago      Running             etcd                      0                   07c7b0c966ea5       etcd-pause-262039                      kube-system
	
	
	==> coredns [ee9a6c3deb88fc6c6f1fa547ca51ad678a76cbaf113b0c24ddc7c6e6bbcfef3e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53359 - 11255 "HINFO IN 3028947443263452986.4233501016873813547. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040755692s
	
	
	==> describe nodes <==
	Name:               pause-262039
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-262039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=pause-262039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-262039
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:24:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:24:31 +0000   Wed, 17 Dec 2025 08:24:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-262039
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5fe93599-bb52-4bac-8aed-adc24791beca
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sttv4                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-262039                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-jl97s                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-262039             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-262039    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-tqfbc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-262039             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node pause-262039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node pause-262039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node pause-262039 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-262039 event: Registered Node pause-262039 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-262039 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 17 bb 9f 9a 4b 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 91 37 97 9f 01 08 06
	[Dec17 07:52] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.033977] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.024926] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.022908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +1.023867] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +2.047880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +4.032673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[  +8.190487] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[ +16.382857] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	
	
	==> etcd [39a0211fc9fd66fe9d1dd29d5baecd78571910f2c934785530cd4220e349a463] <==
	{"level":"warn","ts":"2025-12-17T08:24:09.093800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.121076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.133396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.167923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.223304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.233493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.245291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.257979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.270076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.284121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.298755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.313702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.327253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.338232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.356769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.381016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.397518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.415768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.446839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.453957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.466099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.485128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.496467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.506103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:09.576716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52426","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:24:46 up  2:07,  0 user,  load average: 4.47, 1.95, 1.65
	Linux pause-262039 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c5ee7a9c5e6d7304c42befa72f53f961be85f98800041c60c399ac5442413f5f] <==
	I1217 08:24:21.122331       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:24:21.214844       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 08:24:21.215044       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:24:21.215069       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:24:21.215102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:24:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:24:21.419031       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:24:21.419085       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:24:21.419096       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:24:21.419676       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:24:21.819841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:24:21.819870       1 metrics.go:72] Registering metrics
	I1217 08:24:21.819910       1 controller.go:711] "Syncing nftables rules"
	I1217 08:24:31.421626       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:24:31.421720       1 main.go:301] handling current node
	I1217 08:24:41.426649       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:24:41.426693       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b0b03fe3ffed2a7add26a82beb0a0bd988bae05306d48b406f08a90114b6208c] <==
	I1217 08:24:10.183190       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 08:24:10.183759       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:24:10.183802       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 08:24:10.188694       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:10.188766       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 08:24:10.193964       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:10.194168       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:24:10.366351       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:24:11.082713       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 08:24:11.086826       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 08:24:11.086847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:24:11.677828       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:24:11.735365       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:24:11.889718       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 08:24:11.897497       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1217 08:24:11.898859       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:24:11.903952       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:24:12.096601       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:24:12.869576       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:24:12.884588       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 08:24:12.894920       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:24:17.800688       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 08:24:18.006924       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:18.014581       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:24:18.202035       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5d39484b93e472fb309e715bd605ea96d70867f894116f752c5b1bc176aeaaa2] <==
	I1217 08:24:17.093954       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 08:24:17.094015       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:17.094030       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:24:17.094041       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 08:24:17.094202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 08:24:17.094218       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 08:24:17.095416       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:24:17.095444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 08:24:17.095521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 08:24:17.095598       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 08:24:17.095648       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:24:17.095701       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 08:24:17.095714       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:24:17.095902       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 08:24:17.096047       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:24:17.096114       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 08:24:17.096216       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 08:24:17.097364       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 08:24:17.099652       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:24:17.100788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:24:17.100826       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:24:17.103126       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:24:17.104217       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:24:17.126693       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:32.048034       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5e70bbbc20e84d301f25ac30dd980098a7720809951f862fa94c56ccfb056e41] <==
	I1217 08:24:18.277492       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:24:18.359226       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:24:18.459740       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:24:18.459786       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:24:18.459908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:24:18.515101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:24:18.515236       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:24:18.527413       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:24:18.528016       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:24:18.528077       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:24:18.530113       1 config.go:200] "Starting service config controller"
	I1217 08:24:18.530176       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:24:18.530218       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:24:18.530224       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:24:18.530237       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:24:18.530242       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:24:18.530647       1 config.go:309] "Starting node config controller"
	I1217 08:24:18.530658       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:24:18.530665       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:24:18.631634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:24:18.631745       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:24:18.631729       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5783821a94e6d85f4b199db1ccfbb591ab533b02e6f538a920a32615cc308417] <==
	E1217 08:24:10.144751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 08:24:10.144780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:24:10.144815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 08:24:10.144903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:24:10.144968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 08:24:10.145048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:24:10.145061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 08:24:10.145152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:24:10.145288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 08:24:10.146250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:24:10.148870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:24:10.949555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 08:24:11.087177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:24:11.093449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 08:24:11.094300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 08:24:11.102859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:24:11.109217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 08:24:11.182860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:24:11.203035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 08:24:11.233181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:24:11.284824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 08:24:11.285641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:24:11.318268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 08:24:11.356010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1217 08:24:13.039785       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:24:13 pause-262039 kubelet[1315]: I1217 08:24:13.884687    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-262039" podStartSLOduration=1.884671815 podStartE2EDuration="1.884671815s" podCreationTimestamp="2025-12-17 08:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:24:13.884299502 +0000 UTC m=+1.266419698" watchObservedRunningTime="2025-12-17 08:24:13.884671815 +0000 UTC m=+1.266792011"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.083305    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.084012    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837264    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf608927-6659-4165-8793-2f3df58e1282-kube-proxy\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837312    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf608927-6659-4165-8793-2f3df58e1282-xtables-lock\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837337    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bszlr\" (UniqueName: \"kubernetes.io/projected/cf608927-6659-4165-8793-2f3df58e1282-kube-api-access-bszlr\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.837366    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf608927-6659-4165-8793-2f3df58e1282-lib-modules\") pod \"kube-proxy-tqfbc\" (UID: \"cf608927-6659-4165-8793-2f3df58e1282\") " pod="kube-system/kube-proxy-tqfbc"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938044    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td6lx\" (UniqueName: \"kubernetes.io/projected/1543fcb7-2037-4f57-8878-b172586434df-kube-api-access-td6lx\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938797    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1543fcb7-2037-4f57-8878-b172586434df-cni-cfg\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938833    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1543fcb7-2037-4f57-8878-b172586434df-lib-modules\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:17 pause-262039 kubelet[1315]: I1217 08:24:17.938868    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1543fcb7-2037-4f57-8878-b172586434df-xtables-lock\") pod \"kindnet-jl97s\" (UID: \"1543fcb7-2037-4f57-8878-b172586434df\") " pod="kube-system/kindnet-jl97s"
	Dec 17 08:24:21 pause-262039 kubelet[1315]: I1217 08:24:21.804414    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqfbc" podStartSLOduration=4.804388231 podStartE2EDuration="4.804388231s" podCreationTimestamp="2025-12-17 08:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:24:18.862822591 +0000 UTC m=+6.244942786" watchObservedRunningTime="2025-12-17 08:24:21.804388231 +0000 UTC m=+9.186508426"
	Dec 17 08:24:21 pause-262039 kubelet[1315]: I1217 08:24:21.804520    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jl97s" podStartSLOduration=2.069317733 podStartE2EDuration="4.804515959s" podCreationTimestamp="2025-12-17 08:24:17 +0000 UTC" firstStartedPulling="2025-12-17 08:24:18.150207731 +0000 UTC m=+5.532327919" lastFinishedPulling="2025-12-17 08:24:20.885405955 +0000 UTC m=+8.267526145" observedRunningTime="2025-12-17 08:24:21.804193235 +0000 UTC m=+9.186313453" watchObservedRunningTime="2025-12-17 08:24:21.804515959 +0000 UTC m=+9.186636153"
	Dec 17 08:24:31 pause-262039 kubelet[1315]: I1217 08:24:31.566280    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 08:24:31 pause-262039 kubelet[1315]: I1217 08:24:31.642702    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfttf\" (UniqueName: \"kubernetes.io/projected/016062e9-782c-4d30-b8c9-7792ef42b4c7-kube-api-access-bfttf\") pod \"coredns-66bc5c9577-sttv4\" (UID: \"016062e9-782c-4d30-b8c9-7792ef42b4c7\") " pod="kube-system/coredns-66bc5c9577-sttv4"
	Dec 17 08:24:31 pause-262039 kubelet[1315]: I1217 08:24:31.642764    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/016062e9-782c-4d30-b8c9-7792ef42b4c7-config-volume\") pod \"coredns-66bc5c9577-sttv4\" (UID: \"016062e9-782c-4d30-b8c9-7792ef42b4c7\") " pod="kube-system/coredns-66bc5c9577-sttv4"
	Dec 17 08:24:32 pause-262039 kubelet[1315]: I1217 08:24:32.852519    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sttv4" podStartSLOduration=14.852492179 podStartE2EDuration="14.852492179s" podCreationTimestamp="2025-12-17 08:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:24:32.838367625 +0000 UTC m=+20.220487832" watchObservedRunningTime="2025-12-17 08:24:32.852492179 +0000 UTC m=+20.234612374"
	Dec 17 08:24:37 pause-262039 kubelet[1315]: W1217 08:24:37.835287    1315 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 08:24:37 pause-262039 kubelet[1315]: E1217 08:24:37.835419    1315 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 08:24:37 pause-262039 kubelet[1315]: E1217 08:24:37.835478    1315 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 08:24:37 pause-262039 kubelet[1315]: E1217 08:24:37.835495    1315 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 08:24:41 pause-262039 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:24:41 pause-262039 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:24:41 pause-262039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:24:41 pause-262039 systemd[1]: kubelet.service: Consumed 1.328s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-262039 -n pause-262039
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-262039 -n pause-262039: exit status 2 (354.317185ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-262039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.287144ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:32:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
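Note: the MK_ADDON_ENABLE_PAUSED error above is raised by minikube's paused-state check, which shells out to `sudo runc list -f json` on the node; the listing fails because /run/runc does not exist there, which is consistent with the CRI-O configuration dumped earlier in this report (crun, with runtime_root "/run/crun", is listed ahead of runc). A hand-run reproduction, offered only as a sketch that reuses the profile name from this run, would be:

	# sketch only: repeat the paused-state probe that the addon-enable path performs
	minikube -p old-k8s-version-640910 ssh -- "sudo runc list -f json"   # fails: open /run/runc: no such file or directory
	# the CRI-level view of the same node's containers still works
	minikube -p old-k8s-version-640910 ssh -- "sudo crictl ps"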
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-640910 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-640910 describe deploy/metrics-server -n kube-system: exit status 1 (70.251401ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-640910 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
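(No deployment info appears above because the metrics-server Deployment was never created; the enable command had already failed.) For reference, the image assertion the test performs could be repeated by hand once the addon does come up; a hedged sketch using the context name from this run:

	kubectl --context old-k8s-version-640910 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain fake.domain/registry.k8s.io/echoserver:1.4 per the assertion above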
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-640910
helpers_test.go:244: (dbg) docker inspect old-k8s-version-640910:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265",
	        "Created": "2025-12-17T08:31:29.610221474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 864365,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:31:29.67702371Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/hostname",
	        "HostsPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/hosts",
	        "LogPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265-json.log",
	        "Name": "/old-k8s-version-640910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-640910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-640910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265",
	                "LowerDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-640910",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-640910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-640910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-640910",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-640910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "200b77fd4ec527ff094ef73d393fb707e9e33102e54d98ab713b1ec8aae53a63",
	            "SandboxKey": "/var/run/docker/netns/200b77fd4ec5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-640910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b355f632d1e424bfa46e67475c4907bc9f9b97c58ca4b258317e871521160531",
	                    "EndpointID": "5fd3d897eec6528befa69af569b19cd7c99d905093886051c34d035a4e8521d7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "16:5d:d7:f7:b0:bd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-640910",
	                        "2054167e9d36"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
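The port bindings in the inspect output above (22/tcp published on 127.0.0.1:33495, 8443/tcp on 127.0.0.1:33498, and so on) are what the test harness dereferences when it opens an SSH client to the node; the same Go-template lookup appears later in this log as a cli_runner.go call to "docker container inspect -f". A minimal standalone sketch of that lookup, assuming only that the docker CLI is on PATH (illustrative only, not part of the minikube sources):

	// portlookup.go - illustrative sketch; mirrors the "docker container inspect -f"
	// call shown in the logs to find the host port mapped to the container's 22/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPortForSSH(container string) (string, error) {
		// Same Go-template format string the harness passes via cli_runner.go.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPortForSSH("old-k8s-version-640910")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // 33495 in the inspect output above
	}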
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-640910 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-640910 logs -n 25: (1.446022871s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-055130 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo docker system info                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cri-dockerd --version                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo containerd config dump                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                        │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                         │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                          │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:32:01
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:32:01.552734  876818 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:32:01.553099  876818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:01.553114  876818 out.go:374] Setting ErrFile to fd 2...
	I1217 08:32:01.553121  876818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:01.553340  876818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:32:01.553902  876818 out.go:368] Setting JSON to false
	I1217 08:32:01.555210  876818 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8067,"bootTime":1765952255,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:32:01.555284  876818 start.go:143] virtualization: kvm guest
	I1217 08:32:01.558242  876818 out.go:179] * [default-k8s-diff-port-225657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:32:01.561313  876818 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:32:01.561325  876818 notify.go:221] Checking for updates...
	I1217 08:32:01.568510  876818 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:32:01.571884  876818 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:01.574245  876818 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:32:01.576734  876818 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:32:01.578873  876818 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:32:01.581914  876818 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:01.582052  876818 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:32:01.582137  876818 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:32:01.582248  876818 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:32:01.612172  876818 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:32:01.612311  876818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:01.684785  876818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 08:32:01.672949118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:01.684957  876818 docker.go:319] overlay module found
	I1217 08:32:01.687104  876818 out.go:179] * Using the docker driver based on user configuration
	I1217 08:32:01.688739  876818 start.go:309] selected driver: docker
	I1217 08:32:01.688762  876818 start.go:927] validating driver "docker" against <nil>
	I1217 08:32:01.688779  876818 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:32:01.689371  876818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:01.761694  876818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 08:32:01.749436813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:01.761852  876818 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 08:32:01.762082  876818 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:01.764368  876818 out.go:179] * Using Docker driver with root privileges
	I1217 08:32:01.766035  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:01.766129  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:01.766145  876818 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:32:01.766238  876818 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:01.768170  876818 out.go:179] * Starting "default-k8s-diff-port-225657" primary control-plane node in "default-k8s-diff-port-225657" cluster
	I1217 08:32:01.769863  876818 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:32:01.772343  876818 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:32:01.774131  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:01.774188  876818 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:32:01.774206  876818 cache.go:65] Caching tarball of preloaded images
	I1217 08:32:01.774253  876818 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:32:01.774340  876818 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:32:01.774359  876818 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:32:01.774581  876818 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:32:01.774623  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json: {Name:mkdc1e498a413d8c47a4c9161b8ddc9e11834a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:01.803235  876818 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:32:01.803269  876818 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:32:01.803295  876818 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:32:01.803341  876818 start.go:360] acquireMachinesLock for default-k8s-diff-port-225657: {Name:mkf524609fef75b896bc809c6c5673b68f778ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:32:01.803497  876818 start.go:364] duration metric: took 133.382µs to acquireMachinesLock for "default-k8s-diff-port-225657"
	I1217 08:32:01.803569  876818 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:01.803675  876818 start.go:125] createHost starting for "" (driver="docker")
	I1217 08:31:59.471510  866074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:31:59.487104  866074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1217 08:31:59.492193  866074 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1217 08:31:59.492241  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1217 08:32:01.990912  866074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1217 08:32:02.003508  866074 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1217 08:32:02.003588  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	I1217 08:32:02.288548  866074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:32:02.298803  866074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:32:02.315378  866074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:32:02.402911  866074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 08:32:02.421212  866074 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:32:02.426364  866074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:32:02.442236  866074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:02.553459  866074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:02.590063  866074 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988 for IP: 192.168.94.2
	I1217 08:32:02.590092  866074 certs.go:195] generating shared ca certs ...
	I1217 08:32:02.590113  866074 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.590330  866074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:32:02.590413  866074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:32:02.590429  866074 certs.go:257] generating profile certs ...
	I1217 08:32:02.590514  866074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key
	I1217 08:32:02.590544  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt with IP's: []
	I1217 08:32:02.636814  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt ...
	I1217 08:32:02.636860  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt: {Name:mkc8d6c44408b047376e6be421e3c93768af7dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.637104  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key ...
	I1217 08:32:02.637126  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key: {Name:mk23aabb5dd35dc4380024377e6eece268d19273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.637255  866074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be
	I1217 08:32:02.637279  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 08:31:57.930133  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:58.430261  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:58.930566  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:59.429668  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:59.929814  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.430337  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.930517  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.430253  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.929494  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.430181  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.930157  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.041132  860032 kubeadm.go:1114] duration metric: took 12.777197998s to wait for elevateKubeSystemPrivileges
	I1217 08:32:03.041172  860032 kubeadm.go:403] duration metric: took 25.06139908s to StartCluster
	I1217 08:32:03.041194  860032 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.041275  860032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:03.042238  860032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.042571  860032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:03.042571  860032 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:03.042772  860032 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:32:03.042598  860032 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:03.042829  860032 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-640910"
	I1217 08:32:03.042846  860032 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-640910"
	I1217 08:32:03.042873  860032 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:32:03.043189  860032 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-640910"
	I1217 08:32:03.043227  860032 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-640910"
	I1217 08:32:03.043387  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.043604  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.044941  860032 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:03.047619  860032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:03.077628  860032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:03.079571  860032 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.079600  860032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:03.079664  860032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:03.079881  860032 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-640910"
	I1217 08:32:03.079930  860032 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:32:03.080421  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.115572  860032 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.115595  860032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:03.115604  860032 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:03.115657  860032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:03.149311  860032 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:03.198402  860032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:03.247949  860032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:03.263689  860032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.280464  860032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.580999  860032 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:03.582028  860032 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-640910" to be "Ready" ...
	I1217 08:32:03.834067  860032 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 08:32:00.400233  866708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:00.406292  866708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 08:32:00.406321  866708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:00.424039  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:32:00.743784  866708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:00.743917  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.743934  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-581631 minikube.k8s.io/updated_at=2025_12_17T08_32_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=embed-certs-581631 minikube.k8s.io/primary=true
	I1217 08:32:00.845521  866708 ops.go:34] apiserver oom_adj: -16
	I1217 08:32:00.845595  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.345810  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.845712  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.345788  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.846718  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.345718  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.845894  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.935808  866708 kubeadm.go:1114] duration metric: took 3.191972569s to wait for elevateKubeSystemPrivileges
	I1217 08:32:03.935854  866708 kubeadm.go:403] duration metric: took 16.523773394s to StartCluster
	I1217 08:32:03.935872  866708 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.935942  866708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:03.937291  866708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.937548  866708 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:03.937670  866708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:03.937680  866708 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:03.937783  866708 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-581631"
	I1217 08:32:03.937801  866708 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-581631"
	I1217 08:32:03.937806  866708 addons.go:70] Setting default-storageclass=true in profile "embed-certs-581631"
	I1217 08:32:03.937828  866708 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:03.937836  866708 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:32:03.937842  866708 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-581631"
	I1217 08:32:03.938130  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.938357  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.941811  866708 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:03.943970  866708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:03.964732  866708 addons.go:239] Setting addon default-storageclass=true in "embed-certs-581631"
	I1217 08:32:03.964785  866708 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:32:03.965299  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.969098  866708 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:03.970610  866708 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.970635  866708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:03.970704  866708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:03.995425  866708 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.995462  866708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:03.995547  866708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:04.006698  866708 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:04.031285  866708 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:04.065134  866708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:02.824596  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be ...
	I1217 08:32:02.824631  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be: {Name:mk45976aa0955a0afc1e8d64278dff519aaa2454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.824859  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be ...
	I1217 08:32:02.824886  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be: {Name:mk2dae5a961985112e8e9209c523ebf3ce607cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.825034  866074 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt
	I1217 08:32:02.825138  866074 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key
	I1217 08:32:02.825220  866074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key
	I1217 08:32:02.825243  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt with IP's: []
	I1217 08:32:02.924760  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt ...
	I1217 08:32:02.924794  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt: {Name:mk267cedf76a400096972e8a1d55b0ea70195e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.925012  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key ...
	I1217 08:32:02.925034  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key: {Name:mkcad6ea1b15d8213d3a172ca1538446ff01dcfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.925290  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:32:02.925355  866074 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:32:02.925366  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:32:02.925400  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:32:02.925435  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:32:02.925467  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:32:02.925552  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:02.926601  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:32:02.955081  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:32:02.999049  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:32:03.023623  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:32:03.048610  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:32:03.080188  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:32:03.113501  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:32:03.144381  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:32:03.177658  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:32:03.213764  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:32:03.241889  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:32:03.273169  866074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:32:03.300818  866074 ssh_runner.go:195] Run: openssl version
	I1217 08:32:03.311109  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.324197  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:32:03.346933  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.353468  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.353573  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.417221  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:03.429614  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:03.442003  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.455221  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:32:03.469042  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.477276  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.477361  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.534695  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:32:03.547563  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:32:03.557640  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.571257  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:32:03.585135  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.591025  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.591099  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.646989  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:32:03.662879  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
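Note: the sequence above is how minikube installs each extra CA into the guest's OpenSSL trust store: copy the PEM to /usr/share/ca-certificates, compute its subject hash with openssl, and symlink it into /etc/ssl/certs under <hash>.0. A minimal sketch of the same steps run by hand, using the minikubeCA.pem path shown in the log:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints the subject hash, e.g. b5213941 as seen above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # OpenSSL resolves trusted CAs via <hash>.0 links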
	I1217 08:32:03.679566  866074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:32:03.687395  866074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:32:03.687462  866074 kubeadm.go:401] StartCluster: {Name:no-preload-936988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-936988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:03.687578  866074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:32:03.687637  866074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:32:03.727391  866074 cri.go:89] found id: ""
	I1217 08:32:03.727501  866074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:32:03.738224  866074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:32:03.748723  866074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:32:03.748793  866074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:32:03.760841  866074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:32:03.760866  866074 kubeadm.go:158] found existing configuration files:
	
	I1217 08:32:03.760920  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 08:32:03.772427  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:32:03.772500  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:32:03.783020  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 08:32:03.794743  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:32:03.794817  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:32:03.805322  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 08:32:03.817490  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:32:03.817564  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:32:03.831785  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 08:32:03.843542  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:32:03.843616  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
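Note: the lines above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when it is missing or points elsewhere, so kubeadm init starts from a clean slate. A sketch of the equivalent loop, assuming the same endpoint and file set:

	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done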
	I1217 08:32:03.853047  866074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:32:03.899091  866074 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 08:32:03.899195  866074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:32:04.004708  866074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:32:04.004803  866074 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:32:04.004848  866074 kubeadm.go:319] OS: Linux
	I1217 08:32:04.004909  866074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:32:04.004973  866074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:32:04.005038  866074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:32:04.006028  866074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:32:04.006112  866074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:32:04.006175  866074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:32:04.006240  866074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:32:04.006308  866074 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:32:04.119188  866074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:32:04.119332  866074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:32:04.119474  866074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:32:04.144669  866074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:32:04.132626  866708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:04.136786  866708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:04.152680  866708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:04.291998  866708 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:04.293620  866708 node_ready.go:35] waiting up to 6m0s for node "embed-certs-581631" to be "Ready" ...
	I1217 08:32:04.514479  866708 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 08:32:04.149161  866074 out.go:252]   - Generating certificates and keys ...
	I1217 08:32:04.149271  866074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:32:04.149357  866074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:32:04.345146  866074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:32:04.456420  866074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:32:04.569867  866074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:32:04.769981  866074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:32:04.962017  866074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:32:04.962211  866074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-936988] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 08:32:05.189992  866074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:32:05.190862  866074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-936988] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 08:32:05.314135  866074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:32:05.436298  866074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:32:05.639248  866074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:32:05.639451  866074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:32:05.799909  866074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:32:05.903137  866074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:32:06.294633  866074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:32:06.421349  866074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:32:06.498721  866074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:32:06.499367  866074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:32:06.544114  866074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:32:01.806337  876818 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:32:01.806789  876818 start.go:159] libmachine.API.Create for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:32:01.806841  876818 client.go:173] LocalClient.Create starting
	I1217 08:32:01.806928  876818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:32:01.806973  876818 main.go:143] libmachine: Decoding PEM data...
	I1217 08:32:01.807004  876818 main.go:143] libmachine: Parsing certificate...
	I1217 08:32:01.807100  876818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:32:01.807134  876818 main.go:143] libmachine: Decoding PEM data...
	I1217 08:32:01.807156  876818 main.go:143] libmachine: Parsing certificate...
	I1217 08:32:01.807598  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:32:01.828194  876818 cli_runner.go:211] docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:32:01.828308  876818 network_create.go:284] running [docker network inspect default-k8s-diff-port-225657] to gather additional debugging logs...
	I1217 08:32:01.828345  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657
	W1217 08:32:01.849757  876818 cli_runner.go:211] docker network inspect default-k8s-diff-port-225657 returned with exit code 1
	I1217 08:32:01.849798  876818 network_create.go:287] error running [docker network inspect default-k8s-diff-port-225657]: docker network inspect default-k8s-diff-port-225657: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-225657 not found
	I1217 08:32:01.849822  876818 network_create.go:289] output of [docker network inspect default-k8s-diff-port-225657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-225657 not found
	
	** /stderr **
	I1217 08:32:01.849945  876818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:32:01.874361  876818 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:32:01.875036  876818 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:32:01.875878  876818 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:32:01.876831  876818 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e1180462b720 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:c6:ea:2d:3c:aa} reservation:<nil>}
	I1217 08:32:01.877453  876818 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b355f632d1e4 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:2c:e1:34:c1:34} reservation:<nil>}
	I1217 08:32:01.878105  876818 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-31552e72b7c3 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c2:be:20:58:f7:57} reservation:<nil>}
	I1217 08:32:01.879300  876818 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020fbec0}
	I1217 08:32:01.879341  876818 network_create.go:124] attempt to create docker network default-k8s-diff-port-225657 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 08:32:01.879423  876818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 default-k8s-diff-port-225657
	I1217 08:32:01.960561  876818 network_create.go:108] docker network default-k8s-diff-port-225657 192.168.103.0/24 created
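Note: minikube walked the existing bridge networks above, skipped every subnet already in use, and created default-k8s-diff-port-225657 on the first free /24 (192.168.103.0/24). A quick way to confirm the result, assuming the docker CLI on the same host:

	docker network inspect default-k8s-diff-port-225657 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gateway {{(index .IPAM.Config 0).Gateway}}'
	# expected output: 192.168.103.0/24 gateway 192.168.103.1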
	I1217 08:32:01.960599  876818 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-225657" container
	I1217 08:32:01.960690  876818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:32:01.985847  876818 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-225657 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:32:02.021946  876818 oci.go:103] Successfully created a docker volume default-k8s-diff-port-225657
	I1217 08:32:02.022045  876818 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-225657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --entrypoint /usr/bin/test -v default-k8s-diff-port-225657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:32:02.718937  876818 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-225657
	I1217 08:32:02.719022  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:02.719035  876818 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:32:02.719125  876818 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
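Note: rather than pulling images on first boot, minikube extracts a preloaded image tarball straight into the node's docker volume, using a throwaway kicbase container as the tar runner. The general shape of the command above, with placeholders for the profile-specific paths:

	docker run --rm --entrypoint /usr/bin/tar \
	  -v <preloaded-images>.tar.lz4:/preloaded.tar:ro \
	  -v <profile-volume>:/extractDir \
	  <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir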
	I1217 08:32:06.616828  866074 out.go:252]   - Booting up control plane ...
	I1217 08:32:06.617019  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:32:06.617189  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:32:06.617313  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:32:06.617525  866074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:32:06.617836  866074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:32:06.618011  866074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:32:06.618170  866074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:32:06.618229  866074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:32:06.755893  866074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:32:06.756060  866074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:32:07.756781  866074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001037016s
	I1217 08:32:07.760245  866074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:32:07.760395  866074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 08:32:07.760553  866074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:32:07.760691  866074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:32:03.835993  860032 addons.go:530] duration metric: took 793.383913ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:04.092564  860032 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-640910" context rescaled to 1 replicas
	W1217 08:32:05.590407  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:04.516501  866708 addons.go:530] duration metric: took 578.822881ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:04.798269  866708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-581631" context rescaled to 1 replicas
	W1217 08:32:06.297588  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:08.297713  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:07.917031  876818 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (5.197831615s)
	I1217 08:32:07.917065  876818 kic.go:203] duration metric: took 5.198025236s to extract preloaded images to volume ...
	W1217 08:32:07.917162  876818 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 08:32:07.917207  876818 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 08:32:07.917258  876818 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 08:32:07.988840  876818 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-225657 --name default-k8s-diff-port-225657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --network default-k8s-diff-port-225657 --ip 192.168.103.2 --volume default-k8s-diff-port-225657:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 08:32:08.400242  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Running}}
	I1217 08:32:08.424855  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.450896  876818 cli_runner.go:164] Run: docker exec default-k8s-diff-port-225657 stat /var/lib/dpkg/alternatives/iptables
	I1217 08:32:08.523021  876818 oci.go:144] the created container "default-k8s-diff-port-225657" has a running status.
	I1217 08:32:08.523088  876818 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519...
	I1217 08:32:08.525770  876818 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 08:32:08.560942  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.585171  876818 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 08:32:08.585195  876818 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-225657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 08:32:08.651792  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.677101  876818 machine.go:94] provisionDockerMachine start ...
	I1217 08:32:08.677481  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:08.707459  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:08.707676  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:08.707703  876818 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:32:08.708734  876818 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57932->127.0.0.1:33510: read: connection reset by peer
	I1217 08:32:08.765985  866074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005397586s
	I1217 08:32:09.803829  866074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.0435496s
	I1217 08:32:11.763049  866074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002842384s
	I1217 08:32:11.782622  866074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:32:11.796483  866074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:32:11.808857  866074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:32:11.809099  866074 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-936988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:32:11.818163  866074 kubeadm.go:319] [bootstrap-token] Using token: 7nqi1p.ejost2d3dqegwn4g
	I1217 08:32:11.819926  866074 out.go:252]   - Configuring RBAC rules ...
	I1217 08:32:11.820101  866074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:32:11.823946  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:32:11.832263  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:32:11.835817  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:32:11.838856  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:32:11.842484  866074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:32:12.169848  866074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:32:12.589615  866074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	W1217 08:32:08.088510  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:10.585565  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:12.585741  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:13.169675  866074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:32:13.171934  866074 kubeadm.go:319] 
	I1217 08:32:13.172034  866074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:32:13.172045  866074 kubeadm.go:319] 
	I1217 08:32:13.172161  866074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:32:13.172172  866074 kubeadm.go:319] 
	I1217 08:32:13.172200  866074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:32:13.172277  866074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:32:13.172344  866074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:32:13.172355  866074 kubeadm.go:319] 
	I1217 08:32:13.172415  866074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:32:13.172425  866074 kubeadm.go:319] 
	I1217 08:32:13.172481  866074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:32:13.172494  866074 kubeadm.go:319] 
	I1217 08:32:13.172584  866074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:32:13.172726  866074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:32:13.172821  866074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:32:13.172831  866074 kubeadm.go:319] 
	I1217 08:32:13.172934  866074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:32:13.173027  866074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:32:13.173036  866074 kubeadm.go:319] 
	I1217 08:32:13.173135  866074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7nqi1p.ejost2d3dqegwn4g \
	I1217 08:32:13.173265  866074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:32:13.173294  866074 kubeadm.go:319] 	--control-plane 
	I1217 08:32:13.173303  866074 kubeadm.go:319] 
	I1217 08:32:13.173408  866074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:32:13.173418  866074 kubeadm.go:319] 
	I1217 08:32:13.173517  866074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7nqi1p.ejost2d3dqegwn4g \
	I1217 08:32:13.173666  866074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:32:13.177005  866074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:32:13.177121  866074 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
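Note: the join commands printed by kubeadm above pair a bootstrap token with --discovery-token-ca-cert-hash, the SHA-256 of the cluster CA's public key, which lets a joining node verify it is talking to the intended control plane. A sketch of recomputing that hash, assuming minikube's CA location from earlier in this log (/var/lib/minikube/certs/ca.crt):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should match the hash in the join command above: 45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6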
	I1217 08:32:13.177175  866074 cni.go:84] Creating CNI manager for ""
	I1217 08:32:13.177196  866074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:13.179613  866074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 08:32:10.797040  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:13.297857  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:11.847591  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:32:11.847629  876818 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225657"
	I1217 08:32:11.847703  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:11.870068  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:11.870172  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:11.870184  876818 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225657 && echo "default-k8s-diff-port-225657" | sudo tee /etc/hostname
	I1217 08:32:12.017902  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:32:12.017995  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.040970  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:12.041124  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:12.041148  876818 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225657/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:32:12.174812  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:32:12.174846  876818 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:32:12.174878  876818 ubuntu.go:190] setting up certificates
	I1217 08:32:12.174891  876818 provision.go:84] configureAuth start
	I1217 08:32:12.174961  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:12.195929  876818 provision.go:143] copyHostCerts
	I1217 08:32:12.196007  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:32:12.196020  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:32:12.196106  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:32:12.196259  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:32:12.196274  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:32:12.196320  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:32:12.196402  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:32:12.196413  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:32:12.196438  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:32:12.196495  876818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225657 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-225657 localhost minikube]
	I1217 08:32:12.298236  876818 provision.go:177] copyRemoteCerts
	I1217 08:32:12.298295  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:32:12.298335  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.318951  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:12.424332  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:32:12.450112  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 08:32:12.470525  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 08:32:12.491813  876818 provision.go:87] duration metric: took 316.905148ms to configureAuth
	I1217 08:32:12.491849  876818 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:32:12.492046  876818 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:12.492151  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.513001  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:12.513125  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:12.513141  876818 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:32:12.803327  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:32:12.803363  876818 machine.go:97] duration metric: took 4.126112041s to provisionDockerMachine
	I1217 08:32:12.803378  876818 client.go:176] duration metric: took 10.996527369s to LocalClient.Create
	I1217 08:32:12.803405  876818 start.go:167] duration metric: took 10.99661651s to libmachine.API.Create "default-k8s-diff-port-225657"
	I1217 08:32:12.803414  876818 start.go:293] postStartSetup for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:32:12.803428  876818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:32:12.803520  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:32:12.803590  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.822159  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:12.925471  876818 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:32:12.929675  876818 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:32:12.929714  876818 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:32:12.929734  876818 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:32:12.929814  876818 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:32:12.929919  876818 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:32:12.930052  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:32:12.938904  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:12.961125  876818 start.go:296] duration metric: took 157.693442ms for postStartSetup
	I1217 08:32:12.961555  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:12.982070  876818 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:32:12.982402  876818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:32:12.982460  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.002877  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.095087  876818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:32:13.100174  876818 start.go:128] duration metric: took 11.296476774s to createHost
	I1217 08:32:13.100209  876818 start.go:83] releasing machines lock for "default-k8s-diff-port-225657", held for 11.296696714s
	I1217 08:32:13.100279  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:13.119195  876818 ssh_runner.go:195] Run: cat /version.json
	I1217 08:32:13.119271  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.119274  876818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:32:13.119342  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.139794  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.140091  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.292825  876818 ssh_runner.go:195] Run: systemctl --version
	I1217 08:32:13.301062  876818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:32:13.347657  876818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:32:13.353086  876818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:32:13.353180  876818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:32:13.386293  876818 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 08:32:13.386324  876818 start.go:496] detecting cgroup driver to use...
	I1217 08:32:13.386363  876818 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:32:13.386440  876818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:32:13.406165  876818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:32:13.421667  876818 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:32:13.421735  876818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:32:13.445063  876818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:32:13.474069  876818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:32:13.589514  876818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:32:13.687876  876818 docker.go:234] disabling docker service ...
	I1217 08:32:13.687948  876818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:32:13.709115  876818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:32:13.725179  876818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:32:13.816070  876818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:32:13.908965  876818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:32:13.922931  876818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:32:13.938488  876818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:32:13.938601  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.949886  876818 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:32:13.949966  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.959623  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.969563  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.980342  876818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:32:13.989685  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.999720  876818 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:14.014863  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:14.024968  876818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:32:14.033477  876818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:32:14.041958  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:14.130836  876818 ssh_runner.go:195] Run: sudo systemctl restart crio
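Note: the block above rewrites cri-o's drop-in config before the restart: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to systemd, conmon_cgroup is set to pod, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A quick check that the restart picked up those values, assuming the same drop-in path:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio    # should report "active" once the restart completes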
	I1217 08:32:14.324161  876818 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:32:14.324240  876818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:32:14.328783  876818 start.go:564] Will wait 60s for crictl version
	I1217 08:32:14.328842  876818 ssh_runner.go:195] Run: which crictl
	I1217 08:32:14.332732  876818 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:32:14.358741  876818 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:32:14.358828  876818 ssh_runner.go:195] Run: crio --version
	I1217 08:32:14.389865  876818 ssh_runner.go:195] Run: crio --version
	I1217 08:32:14.421345  876818 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 08:32:14.423125  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:32:14.443156  876818 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:32:14.448782  876818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:32:14.461614  876818 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:32:14.461796  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:14.461847  876818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:32:14.497773  876818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:32:14.497797  876818 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:32:14.497850  876818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:32:14.528137  876818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:32:14.528160  876818 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:32:14.528168  876818 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1217 08:32:14.528254  876818 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:32:14.528318  876818 ssh_runner.go:195] Run: crio config
	I1217 08:32:14.584472  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:14.584502  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:14.584524  876818 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:32:14.584583  876818 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225657 NodeName:default-k8s-diff-port-225657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:32:14.584763  876818 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:32:14.584847  876818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:32:14.594854  876818 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:32:14.594919  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:32:14.605478  876818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1217 08:32:14.621822  876818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:32:14.641660  876818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
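The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init (see the cp step later in this log). An illustrative, non-test way to confirm what actually landed on the node:

    # Hedged example; the profile name is the one under test in this log.
    $ minikube -p default-k8s-diff-port-225657 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml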
	I1217 08:32:14.656519  876818 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:32:14.660626  876818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
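The one-liner above is minikube's idempotent pattern for pinning a hosts entry: strip any stale line for the name, append the current mapping, stage the result in a temp file, and copy it back with sudo (a plain redirect would not run with elevated privileges). The same shape spelled out, with the name and IP from the log as illustrative values:

    $ NAME=control-plane.minikube.internal IP=192.168.103.2
    $ { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    $ sudo cp /tmp/h.$$ /etc/hosts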
	I1217 08:32:14.672783  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:14.763211  876818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:14.795518  876818 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657 for IP: 192.168.103.2
	I1217 08:32:14.795574  876818 certs.go:195] generating shared ca certs ...
	I1217 08:32:14.795596  876818 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.795767  876818 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:32:14.795826  876818 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:32:14.795840  876818 certs.go:257] generating profile certs ...
	I1217 08:32:14.795954  876818 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key
	I1217 08:32:14.795977  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt with IP's: []
	I1217 08:32:14.863228  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt ...
	I1217 08:32:14.863262  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt: {Name:mkdcfa20690e66f7711fa7eedb1c17f0013cea3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.863459  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key ...
	I1217 08:32:14.863479  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key: {Name:mk0c147f99dbcd9cd0b76dd50dbcc7358fb09eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.863633  876818 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92
	I1217 08:32:14.863658  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 08:32:14.926506  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 ...
	I1217 08:32:14.926559  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92: {Name:mkeab2e9787f4fdc822d05ef2a5a31d73807e7a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.926783  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92 ...
	I1217 08:32:14.926807  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92: {Name:mk908b8eefd79aa9fd3e47b0e9dd700056cd3a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.926928  876818 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt
	I1217 08:32:14.927054  876818 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key
	I1217 08:32:14.927150  876818 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key
	I1217 08:32:14.927179  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt with IP's: []
	I1217 08:32:14.999838  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt ...
	I1217 08:32:14.999868  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt: {Name:mk1fe736b631b3578e9134ad8e647a4ce10e1dfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:15.000043  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key ...
	I1217 08:32:15.000057  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key: {Name:mkff008ec12026d35b6afe310c5ec1f253ee363c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:15.000226  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:32:15.000264  876818 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:32:15.000274  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:32:15.000297  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:32:15.000320  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:32:15.000412  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:32:15.000466  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:15.001158  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:32:15.022100  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:32:15.041157  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:32:15.060148  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:32:15.083276  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 08:32:15.106435  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:32:15.130968  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:32:15.151120  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:32:15.171738  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:32:15.196153  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:32:15.216671  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:32:15.237947  876818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:32:15.252432  876818 ssh_runner.go:195] Run: openssl version
	I1217 08:32:15.259775  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.268463  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:32:15.277889  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.282785  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.282840  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.321178  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:15.332457  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:15.342212  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.350940  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:32:15.359554  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.363904  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.363983  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.405120  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:32:15.413813  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:32:15.422721  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.431742  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:32:15.440099  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.445063  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.445150  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.485945  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:32:15.495194  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
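The repeated test/openssl/ln steps above implement the standard OpenSSL subject-hash symlink convention: each CA is copied under /usr/share/ca-certificates and linked into /etc/ssl/certs as <subject-hash>.0 so OpenSSL-based clients can find it. A condensed sketch of the same idea, using a file name from this log:

    $ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run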
	I1217 08:32:15.504455  876818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:32:15.509009  876818 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:32:15.509078  876818 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:15.509159  876818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:32:15.509212  876818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:32:15.542000  876818 cri.go:89] found id: ""
	I1217 08:32:15.542078  876818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:32:15.551359  876818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:32:15.561709  876818 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:32:15.561782  876818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:32:15.571765  876818 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:32:15.571803  876818 kubeadm.go:158] found existing configuration files:
	
	I1217 08:32:15.571859  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 08:32:15.582295  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:32:15.582353  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:32:15.591491  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 08:32:15.600423  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:32:15.600486  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:32:15.609303  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 08:32:15.618950  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:32:15.619020  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:32:15.628515  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 08:32:15.637988  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:32:15.638046  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:32:15.646553  876818 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:32:15.711259  876818 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:32:15.775493  876818 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:32:13.181565  866074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:13.186365  866074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 08:32:13.186392  866074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:13.200333  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:32:13.442509  866074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:13.442697  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:13.442819  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-936988 minikube.k8s.io/updated_at=2025_12_17T08_32_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=no-preload-936988 minikube.k8s.io/primary=true
	I1217 08:32:13.461848  866074 ops.go:34] apiserver oom_adj: -16
	I1217 08:32:13.552148  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:14.053185  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:14.552761  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:15.052803  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:15.552460  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:16.052244  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:16.552431  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.052268  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.552825  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.630473  866074 kubeadm.go:1114] duration metric: took 4.187821582s to wait for elevateKubeSystemPrivileges
	I1217 08:32:17.630512  866074 kubeadm.go:403] duration metric: took 13.943055923s to StartCluster
	I1217 08:32:17.630550  866074 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:17.630631  866074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:17.632218  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:17.632577  866074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:17.632602  866074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:17.632683  866074 addons.go:70] Setting storage-provisioner=true in profile "no-preload-936988"
	I1217 08:32:17.632702  866074 addons.go:239] Setting addon storage-provisioner=true in "no-preload-936988"
	I1217 08:32:17.632731  866074 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:32:17.632780  866074 addons.go:70] Setting default-storageclass=true in profile "no-preload-936988"
	I1217 08:32:17.632811  866074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-936988"
	I1217 08:32:17.633099  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.632569  866074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:17.633241  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.633548  866074 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:32:17.634963  866074 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:17.640101  866074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:17.668302  866074 addons.go:239] Setting addon default-storageclass=true in "no-preload-936988"
	I1217 08:32:17.668358  866074 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:32:17.668491  866074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:17.668875  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.670105  866074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:17.670126  866074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:17.670199  866074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-936988
	I1217 08:32:17.704878  866074 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/no-preload-936988/id_ed25519 Username:docker}
	I1217 08:32:17.708635  866074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:17.708697  866074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:17.708784  866074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-936988
	I1217 08:32:17.736140  866074 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/no-preload-936988/id_ed25519 Username:docker}
	I1217 08:32:17.758941  866074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:17.809686  866074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1217 08:32:14.586794  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:16.587145  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:17.830908  866074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:17.863741  866074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:18.003143  866074 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:18.004756  866074 node_ready.go:35] waiting up to 6m0s for node "no-preload-936988" to be "Ready" ...
	I1217 08:32:18.233677  866074 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
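The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.94.1 for this profile). An illustrative check, assuming the kubeconfig context minikube creates for the profile:

    $ kubectl --context no-preload-936988 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'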
	W1217 08:32:15.797222  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:17.797847  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:19.088257  860032 node_ready.go:49] node "old-k8s-version-640910" is "Ready"
	I1217 08:32:19.088298  860032 node_ready.go:38] duration metric: took 15.506235047s for node "old-k8s-version-640910" to be "Ready" ...
	I1217 08:32:19.088315  860032 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:19.088364  860032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:19.106742  860032 api_server.go:72] duration metric: took 16.064061733s to wait for apiserver process to appear ...
	I1217 08:32:19.106778  860032 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:19.106802  860032 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 08:32:19.113046  860032 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 08:32:19.114637  860032 api_server.go:141] control plane version: v1.28.0
	I1217 08:32:19.114666  860032 api_server.go:131] duration metric: took 7.880626ms to wait for apiserver health ...
	I1217 08:32:19.114680  860032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:19.120583  860032 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:19.120638  860032 system_pods.go:61] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.120654  860032 system_pods.go:61] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.120662  860032 system_pods.go:61] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.120680  860032 system_pods.go:61] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.120695  860032 system_pods.go:61] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.120700  860032 system_pods.go:61] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.120706  860032 system_pods.go:61] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.120714  860032 system_pods.go:61] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.120729  860032 system_pods.go:74] duration metric: took 6.039419ms to wait for pod list to return data ...
	I1217 08:32:19.120746  860032 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:19.123936  860032 default_sa.go:45] found service account: "default"
	I1217 08:32:19.123970  860032 default_sa.go:55] duration metric: took 3.215131ms for default service account to be created ...
	I1217 08:32:19.124052  860032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:19.129828  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.129937  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.129949  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.129960  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.129965  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.129971  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.129976  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.129980  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.129987  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.130015  860032 retry.go:31] will retry after 193.985772ms: missing components: kube-dns
	I1217 08:32:19.330692  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.330740  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.330752  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.330761  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.330767  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.330772  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.330777  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.330780  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.330784  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.330808  860032 retry.go:31] will retry after 264.53787ms: missing components: kube-dns
	I1217 08:32:19.602757  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.602794  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Running
	I1217 08:32:19.602803  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.602808  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.602813  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.602818  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.602823  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.602828  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.602833  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Running
	I1217 08:32:19.602844  860032 system_pods.go:126] duration metric: took 478.778338ms to wait for k8s-apps to be running ...
	I1217 08:32:19.602855  860032 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:32:19.602919  860032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:32:19.623074  860032 system_svc.go:56] duration metric: took 20.20768ms WaitForService to wait for kubelet
	I1217 08:32:19.623106  860032 kubeadm.go:587] duration metric: took 16.580433192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:19.623129  860032 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:32:19.626994  860032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:32:19.627046  860032 node_conditions.go:123] node cpu capacity is 8
	I1217 08:32:19.627099  860032 node_conditions.go:105] duration metric: took 3.935608ms to run NodePressure ...
	I1217 08:32:19.627120  860032 start.go:242] waiting for startup goroutines ...
	I1217 08:32:19.627130  860032 start.go:247] waiting for cluster config update ...
	I1217 08:32:19.627144  860032 start.go:256] writing updated cluster config ...
	I1217 08:32:19.627664  860032 ssh_runner.go:195] Run: rm -f paused
	I1217 08:32:19.633357  860032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:19.639945  860032 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.647463  860032 pod_ready.go:94] pod "coredns-5dd5756b68-mr99d" is "Ready"
	I1217 08:32:19.647499  860032 pod_ready.go:86] duration metric: took 7.52479ms for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.652349  860032 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.659423  860032 pod_ready.go:94] pod "etcd-old-k8s-version-640910" is "Ready"
	I1217 08:32:19.659461  860032 pod_ready.go:86] duration metric: took 7.072786ms for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.663294  860032 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.669941  860032 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-640910" is "Ready"
	I1217 08:32:19.669976  860032 pod_ready.go:86] duration metric: took 6.648805ms for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.673979  860032 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.042200  860032 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-640910" is "Ready"
	I1217 08:32:20.042232  860032 pod_ready.go:86] duration metric: took 368.226903ms for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.239616  860032 pod_ready.go:83] waiting for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.638112  860032 pod_ready.go:94] pod "kube-proxy-cwfwr" is "Ready"
	I1217 08:32:20.638140  860032 pod_ready.go:86] duration metric: took 398.494834ms for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.840026  860032 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.239099  860032 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-640910" is "Ready"
	I1217 08:32:21.239132  860032 pod_ready.go:86] duration metric: took 399.059167ms for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.239147  860032 pod_ready.go:40] duration metric: took 1.605741174s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:21.285586  860032 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 08:32:21.338149  860032 out.go:203] 
	W1217 08:32:21.340341  860032 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 08:32:21.342018  860032 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 08:32:21.345623  860032 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-640910" cluster and "default" namespace by default
	W1217 08:32:21.349107  860032 root.go:91] failed to log command end to audit: failed to find a log row with id equals to dd79e1a3-c046-43f1-a071-2f0a5a4d6a1b
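The skew warning above notes that the host kubectl (1.34.3) is six minor versions ahead of this 1.28.0 cluster. As the log itself suggests, the version-matched kubectl bundled with minikube sidesteps the mismatch:

    $ minikube -p old-k8s-version-640910 kubectl -- get pods -A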
	I1217 08:32:18.235063  866074 addons.go:530] duration metric: took 602.463723ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:18.509742  866074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-936988" context rescaled to 1 replicas
	W1217 08:32:20.008693  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:22.509397  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:20.297227  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:20.796895  866708 node_ready.go:49] node "embed-certs-581631" is "Ready"
	I1217 08:32:20.796932  866708 node_ready.go:38] duration metric: took 16.503273535s for node "embed-certs-581631" to be "Ready" ...
	I1217 08:32:20.796952  866708 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:20.797007  866708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:20.811909  866708 api_server.go:72] duration metric: took 16.874314934s to wait for apiserver process to appear ...
	I1217 08:32:20.811944  866708 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:20.811970  866708 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:32:20.817838  866708 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:32:20.819086  866708 api_server.go:141] control plane version: v1.34.3
	I1217 08:32:20.819118  866708 api_server.go:131] duration metric: took 7.165561ms to wait for apiserver health ...
	I1217 08:32:20.819129  866708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:20.823436  866708 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:20.823477  866708 system_pods.go:61] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:20.823491  866708 system_pods.go:61] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:20.823500  866708 system_pods.go:61] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:20.823506  866708 system_pods.go:61] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:20.823512  866708 system_pods.go:61] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:20.823518  866708 system_pods.go:61] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:20.823523  866708 system_pods.go:61] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:20.823540  866708 system_pods.go:61] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:20.823549  866708 system_pods.go:74] duration metric: took 4.412326ms to wait for pod list to return data ...
	I1217 08:32:20.823559  866708 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:20.827902  866708 default_sa.go:45] found service account: "default"
	I1217 08:32:20.827931  866708 default_sa.go:55] duration metric: took 4.364348ms for default service account to be created ...
	I1217 08:32:20.827945  866708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:20.924443  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:20.924498  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:20.924512  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:20.924573  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:20.924580  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:20.924586  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:20.924592  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:20.924603  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:20.924611  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:20.924654  866708 retry.go:31] will retry after 243.506417ms: missing components: kube-dns
	I1217 08:32:21.172665  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.172712  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:21.172718  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.172723  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.172728  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.172732  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.172735  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.172738  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.172743  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:21.172760  866708 retry.go:31] will retry after 326.410198ms: missing components: kube-dns
	I1217 08:32:21.506028  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.506083  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:21.506094  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.506101  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.506107  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.506115  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.506121  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.506126  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.506147  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:21.506169  866708 retry.go:31] will retry after 400.365348ms: missing components: kube-dns
	I1217 08:32:21.911225  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.911348  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Running
	I1217 08:32:21.911362  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.911368  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.911373  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.911381  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.911386  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.911392  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.911396  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Running
	I1217 08:32:21.911415  866708 system_pods.go:126] duration metric: took 1.083462108s to wait for k8s-apps to be running ...
	I1217 08:32:21.911427  866708 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:32:21.911486  866708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:32:21.926549  866708 system_svc.go:56] duration metric: took 15.103695ms WaitForService to wait for kubelet
	I1217 08:32:21.926585  866708 kubeadm.go:587] duration metric: took 17.988996239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:21.926608  866708 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:32:21.929905  866708 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:32:21.929939  866708 node_conditions.go:123] node cpu capacity is 8
	I1217 08:32:21.929959  866708 node_conditions.go:105] duration metric: took 3.345146ms to run NodePressure ...
	I1217 08:32:21.929987  866708 start.go:242] waiting for startup goroutines ...
	I1217 08:32:21.929998  866708 start.go:247] waiting for cluster config update ...
	I1217 08:32:21.930013  866708 start.go:256] writing updated cluster config ...
	I1217 08:32:21.930341  866708 ssh_runner.go:195] Run: rm -f paused
	I1217 08:32:21.935015  866708 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:21.939503  866708 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.944458  866708 pod_ready.go:94] pod "coredns-66bc5c9577-p7sqj" is "Ready"
	I1217 08:32:21.944489  866708 pod_ready.go:86] duration metric: took 4.957519ms for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.946799  866708 pod_ready.go:83] waiting for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.950990  866708 pod_ready.go:94] pod "etcd-embed-certs-581631" is "Ready"
	I1217 08:32:21.951012  866708 pod_ready.go:86] duration metric: took 4.188719ms for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.952992  866708 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.956931  866708 pod_ready.go:94] pod "kube-apiserver-embed-certs-581631" is "Ready"
	I1217 08:32:21.956954  866708 pod_ready.go:86] duration metric: took 3.940004ms for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.958889  866708 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.340125  866708 pod_ready.go:94] pod "kube-controller-manager-embed-certs-581631" is "Ready"
	I1217 08:32:22.340165  866708 pod_ready.go:86] duration metric: took 381.252466ms for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.539721  866708 pod_ready.go:83] waiting for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.939424  866708 pod_ready.go:94] pod "kube-proxy-7z26t" is "Ready"
	I1217 08:32:22.939452  866708 pod_ready.go:86] duration metric: took 399.692811ms for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.140192  866708 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.540426  866708 pod_ready.go:94] pod "kube-scheduler-embed-certs-581631" is "Ready"
	I1217 08:32:23.540465  866708 pod_ready.go:86] duration metric: took 400.236944ms for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.540484  866708 pod_ready.go:40] duration metric: took 1.60543256s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:23.588350  866708 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:32:23.590603  866708 out.go:179] * Done! kubectl is now configured to use "embed-certs-581631" cluster and "default" namespace by default
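	The pod_ready waits above poll each kube-system control-plane pod until its Ready condition reports true. A minimal client-go sketch of that pattern is shown below; it assumes a reachable kubeconfig at the default path and uses the k8s-app=kube-dns selector from the log as an example. It is an illustration of the polling idea only, not minikube's own implementation.

	// podready_sketch.go - poll kube-system for a Ready coredns pod,
	// roughly mirroring the pod_ready waits in the log above.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether a pod's Ready condition is True.
	func isReady(pod corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig lives at ~/.kube/config (as "Done!" above implies).
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && len(pods.Items) > 0 && isReady(pods.Items[0]) {
				fmt.Println("coredns is Ready")
				return
			}
			time.Sleep(400 * time.Millisecond) // fixed interval; minikube uses a retry/backoff helper
		}
		fmt.Println("timed out waiting for coredns")
	}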
	I1217 08:32:26.556175  876818 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 08:32:26.556252  876818 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:32:26.556377  876818 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:32:26.556450  876818 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:32:26.556515  876818 kubeadm.go:319] OS: Linux
	I1217 08:32:26.556622  876818 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:32:26.556686  876818 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:32:26.556759  876818 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:32:26.556827  876818 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:32:26.556897  876818 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:32:26.556963  876818 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:32:26.557031  876818 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:32:26.557094  876818 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:32:26.557191  876818 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:32:26.557272  876818 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:32:26.557426  876818 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:32:26.557524  876818 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:32:26.559101  876818 out.go:252]   - Generating certificates and keys ...
	I1217 08:32:26.559206  876818 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:32:26.559306  876818 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:32:26.559404  876818 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:32:26.559494  876818 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:32:26.559588  876818 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:32:26.559669  876818 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:32:26.559768  876818 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:32:26.559896  876818 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-225657 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 08:32:26.559944  876818 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:32:26.560063  876818 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-225657 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 08:32:26.560119  876818 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:32:26.560192  876818 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:32:26.560248  876818 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:32:26.560305  876818 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:32:26.560354  876818 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:32:26.560409  876818 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:32:26.560458  876818 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:32:26.560520  876818 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:32:26.560589  876818 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:32:26.560688  876818 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:32:26.560784  876818 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:32:26.563576  876818 out.go:252]   - Booting up control plane ...
	I1217 08:32:26.563704  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:32:26.563826  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:32:26.563906  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:32:26.564009  876818 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:32:26.564152  876818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:32:26.564278  876818 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:32:26.564399  876818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:32:26.564441  876818 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:32:26.564577  876818 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:32:26.564713  876818 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:32:26.564805  876818 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001759685s
	I1217 08:32:26.564919  876818 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:32:26.565037  876818 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1217 08:32:26.565141  876818 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:32:26.565222  876818 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:32:26.565289  876818 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.014387076s
	I1217 08:32:26.565342  876818 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.356564571s
	I1217 08:32:26.565447  876818 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00264337s
	I1217 08:32:26.565610  876818 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:32:26.565736  876818 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:32:26.565800  876818 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:32:26.565982  876818 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-225657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:32:26.566041  876818 kubeadm.go:319] [bootstrap-token] Using token: 5amo5u.ea0ubedundw2l43g
	I1217 08:32:26.567706  876818 out.go:252]   - Configuring RBAC rules ...
	I1217 08:32:26.567799  876818 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:32:26.567870  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:32:26.567982  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:32:26.568087  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:32:26.568181  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:32:26.568278  876818 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:32:26.568368  876818 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:32:26.568412  876818 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 08:32:26.568453  876818 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:32:26.568458  876818 kubeadm.go:319] 
	I1217 08:32:26.568525  876818 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:32:26.568544  876818 kubeadm.go:319] 
	I1217 08:32:26.568624  876818 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:32:26.568631  876818 kubeadm.go:319] 
	I1217 08:32:26.568650  876818 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:32:26.568715  876818 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:32:26.568761  876818 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:32:26.568765  876818 kubeadm.go:319] 
	I1217 08:32:26.568806  876818 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:32:26.568811  876818 kubeadm.go:319] 
	I1217 08:32:26.568848  876818 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:32:26.568853  876818 kubeadm.go:319] 
	I1217 08:32:26.568944  876818 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:32:26.569079  876818 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:32:26.569187  876818 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:32:26.569197  876818 kubeadm.go:319] 
	I1217 08:32:26.569320  876818 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:32:26.569417  876818 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:32:26.569424  876818 kubeadm.go:319] 
	I1217 08:32:26.569494  876818 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 5amo5u.ea0ubedundw2l43g \
	I1217 08:32:26.569666  876818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:32:26.569717  876818 kubeadm.go:319] 	--control-plane 
	I1217 08:32:26.569727  876818 kubeadm.go:319] 
	I1217 08:32:26.569859  876818 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:32:26.569871  876818 kubeadm.go:319] 
	I1217 08:32:26.569979  876818 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 5amo5u.ea0ubedundw2l43g \
	I1217 08:32:26.570124  876818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:32:26.570137  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:26.570148  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:26.572044  876818 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 08:32:25.008774  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:27.508833  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
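	The two warnings above come from the node_ready wait for the no-preload profile, which keeps retrying while the node's Ready condition is still False. A short sketch of that check with client-go follows, under the same kubeconfig assumption as the previous example; the node name is taken from the log purely for illustration.

	// nodeready_sketch.go - read a node's Ready condition, the signal the
	// node_ready retries above are waiting on (illustrative only).
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// "no-preload-936988" is the node name reported in the warnings above.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-936988", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("node %s Ready=%v\n", node.Name, ready)
	}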
	
	
	==> CRI-O <==
	Dec 17 08:32:19 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:19.082022384Z" level=info msg="Starting container: cb5d42af05d058bdd9a5a64bd3c08993351a6ec90dd41a865b1b09343cb18901" id=55cdb712-48ca-4274-a130-3fbe9eed3717 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:19 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:19.084458115Z" level=info msg="Started container" PID=2174 containerID=cb5d42af05d058bdd9a5a64bd3c08993351a6ec90dd41a865b1b09343cb18901 description=kube-system/coredns-5dd5756b68-mr99d/coredns id=55cdb712-48ca-4274-a130-3fbe9eed3717 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcf671c98bbc9b35aba6007c47c39affef0f561c439b5f2699a2cbba56d04be9
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.82961241Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3e1150d2-62a4-4c07-b5cf-56b7fca600f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.829704145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.835237751Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e00d58a108d765858cf2780d01b89fa247de979b25b81dc7e2c9bf142d2c883 UID:0f766262-719c-4660-98c5-17e8294dcee3 NetNS:/var/run/netns/b40fb65f-1265-482e-b45b-517b7ff9fdf3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000130e10}] Aliases:map[]}"
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.83526926Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.846561346Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e00d58a108d765858cf2780d01b89fa247de979b25b81dc7e2c9bf142d2c883 UID:0f766262-719c-4660-98c5-17e8294dcee3 NetNS:/var/run/netns/b40fb65f-1265-482e-b45b-517b7ff9fdf3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000130e10}] Aliases:map[]}"
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.846732518Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.847769534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.848872711Z" level=info msg="Ran pod sandbox 5e00d58a108d765858cf2780d01b89fa247de979b25b81dc7e2c9bf142d2c883 with infra container: default/busybox/POD" id=3e1150d2-62a4-4c07-b5cf-56b7fca600f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.850735477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e357220e-5476-42f9-9be1-fffc92e05116 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.850875075Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e357220e-5476-42f9-9be1-fffc92e05116 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.850918955Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e357220e-5476-42f9-9be1-fffc92e05116 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.851663975Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aea34241-8d08-4708-a7a0-3cdaf8914e06 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:21 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:21.854492609Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.20760545Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=aea34241-8d08-4708-a7a0-3cdaf8914e06 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.208486329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a941b912-95b7-4080-ae84-6ac988cae266 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.209923178Z" level=info msg="Creating container: default/busybox/busybox" id=67021599-70ec-452f-88c1-b9e032f31631 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.210050127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.213929949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.214336473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.241576397Z" level=info msg="Created container 58c5ecc24a5a829c5f08cfb3b0b3ae39355a3a30a3466053cd8ef137e5f5fb2f: default/busybox/busybox" id=67021599-70ec-452f-88c1-b9e032f31631 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.242261529Z" level=info msg="Starting container: 58c5ecc24a5a829c5f08cfb3b0b3ae39355a3a30a3466053cd8ef137e5f5fb2f" id=f7774361-1332-4174-b036-6105c94183c5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:24 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:24.244618954Z" level=info msg="Started container" PID=2247 containerID=58c5ecc24a5a829c5f08cfb3b0b3ae39355a3a30a3466053cd8ef137e5f5fb2f description=default/busybox/busybox id=f7774361-1332-4174-b036-6105c94183c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e00d58a108d765858cf2780d01b89fa247de979b25b81dc7e2c9bf142d2c883
	Dec 17 08:32:30 old-k8s-version-640910 crio[766]: time="2025-12-17T08:32:30.619239849Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	58c5ecc24a5a8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   5e00d58a108d7       busybox                                          default
	cb5d42af05d05       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   dcf671c98bbc9       coredns-5dd5756b68-mr99d                         kube-system
	93f895229394b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   19c72436e891e       storage-provisioner                              kube-system
	30ac822172189       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   f876b78f069de       kindnet-x9g6n                                    kube-system
	3eafb272f2bae       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      29 seconds ago      Running             kube-proxy                0                   f474ba62cf4be       kube-proxy-cwfwr                                 kube-system
	c9c5a488612e4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      48 seconds ago      Running             etcd                      0                   5fdc7c5631215       etcd-old-k8s-version-640910                      kube-system
	335c1393f43cf       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      48 seconds ago      Running             kube-scheduler            0                   a15788ec4f864       kube-scheduler-old-k8s-version-640910            kube-system
	d943605caae75       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      48 seconds ago      Running             kube-apiserver            0                   b069ee45c6b81       kube-apiserver-old-k8s-version-640910            kube-system
	d9233919ae0f7       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      48 seconds ago      Running             kube-controller-manager   0                   31e1c5ab35f61       kube-controller-manager-old-k8s-version-640910   kube-system
	
	
	==> coredns [cb5d42af05d058bdd9a5a64bd3c08993351a6ec90dd41a865b1b09343cb18901] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55960 - 49719 "HINFO IN 4994495473744482500.4250866610507926647. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032060593s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-640910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-640910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=old-k8s-version-640910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_31_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-640910
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:32:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:32:20 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:32:20 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:32:20 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:32:20 +0000   Wed, 17 Dec 2025 08:32:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-640910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                a3280b33-8da6-4c10-b813-cb05f9aa1448
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-mr99d                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-640910                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-x9g6n                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-640910             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-640910    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-cwfwr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-640910             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s   kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s   kubelet          Node old-k8s-version-640910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s   kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-640910 event: Registered Node old-k8s-version-640910 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-640910 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [c9c5a488612e46ee24edf6567b359b8ef664d27f0ff2ce8adf789e8bb5d18909] <==
	{"level":"info","ts":"2025-12-17T08:31:44.10126Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T08:31:44.101293Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T08:31:44.101687Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T08:31:44.101717Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T08:31:44.286173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T08:31:44.286226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T08:31:44.286242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-17T08:31:44.286258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:31:44.286263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T08:31:44.286271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-17T08:31:44.286279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T08:31:44.287307Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-640910 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:31:44.287418Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:31:44.28748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:31:44.287666Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:31:44.287691Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:31:44.287706Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T08:31:44.288633Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T08:31:44.288943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-17T08:31:44.288933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T08:31:44.289172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T08:31:44.289215Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2025-12-17T08:31:54.532088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.837593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-640910\" ","response":"range_response_count:1 size:7458"}
	{"level":"info","ts":"2025-12-17T08:31:54.532196Z","caller":"traceutil/trace.go:171","msg":"trace[190121108] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-old-k8s-version-640910; range_end:; response_count:1; response_revision:252; }","duration":"186.964552ms","start":"2025-12-17T08:31:54.345214Z","end":"2025-12-17T08:31:54.532178Z","steps":["trace[190121108] 'range keys from in-memory index tree'  (duration: 186.716973ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:31:54.667998Z","caller":"traceutil/trace.go:171","msg":"trace[241476281] transaction","detail":"{read_only:false; response_revision:253; number_of_response:1; }","duration":"127.631201ms","start":"2025-12-17T08:31:54.540351Z","end":"2025-12-17T08:31:54.667982Z","steps":["trace[241476281] 'process raft request'  (duration: 127.497656ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:32:32 up  2:14,  0 user,  load average: 5.55, 3.89, 2.70
	Linux old-k8s-version-640910 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [30ac82217218993c3b4cf3f8b25993221a3e6223b99b5c9148bb56252beebe67] <==
	I1217 08:32:08.215656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:32:08.216062       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 08:32:08.216204       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:32:08.216223       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:32:08.216246       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:32:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:32:08.508248       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:32:08.610607       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:32:08.610658       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:32:08.610835       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:32:08.810971       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:32:08.811263       1 metrics.go:72] Registering metrics
	I1217 08:32:08.812092       1 controller.go:711] "Syncing nftables rules"
	I1217 08:32:18.516100       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:32:18.516166       1 main.go:301] handling current node
	I1217 08:32:28.508995       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:32:28.509046       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d943605caae750e62ec8d2420cab4cdfc75a7a6f332fcd83ad95af204b8b3e37] <==
	I1217 08:31:46.020991       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 08:31:46.021019       1 aggregator.go:166] initial CRD sync complete...
	I1217 08:31:46.021140       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 08:31:46.021155       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 08:31:46.021164       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:31:46.021806       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 08:31:46.022314       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 08:31:46.023381       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 08:31:46.030869       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 08:31:46.047625       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:31:46.927291       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 08:31:46.931576       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 08:31:46.931597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:31:47.637056       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:31:47.693426       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:31:47.844444       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 08:31:47.851714       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 08:31:47.853117       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 08:31:47.859253       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:31:48.013678       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 08:31:49.205176       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 08:31:49.219824       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 08:31:49.236299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1217 08:32:02.636953       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 08:32:02.786283       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d9233919ae0f7cfb47f261bbb10a4e5246b1af87a2a8ce7f2031077657ccebc6] <==
	I1217 08:32:02.620983       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 08:32:02.646788       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1217 08:32:02.680943       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 08:32:02.681068       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 08:32:02.797466       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cwfwr"
	I1217 08:32:02.802058       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-x9g6n"
	I1217 08:32:03.113725       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fctj2"
	I1217 08:32:03.145827       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mr99d"
	I1217 08:32:03.159508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="512.231607ms"
	I1217 08:32:03.176079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.362601ms"
	I1217 08:32:03.177139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.247µs"
	I1217 08:32:03.189579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="169.17µs"
	I1217 08:32:03.613595       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1217 08:32:03.637378       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-fctj2"
	I1217 08:32:03.653192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.23623ms"
	I1217 08:32:03.664461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.083202ms"
	I1217 08:32:03.664994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="206.722µs"
	I1217 08:32:18.730507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.785µs"
	I1217 08:32:18.750430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="157.474µs"
	I1217 08:32:19.464463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.329424ms"
	I1217 08:32:19.464656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.893µs"
	I1217 08:32:22.283105       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1217 08:32:22.283390       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-mr99d" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-mr99d"
	I1217 08:32:22.283412       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1217 08:32:22.283424       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox"
	
	
	==> kube-proxy [3eafb272f2bae14adacd65caddbf96e1bc23f85310d8e36a785f67593b837ae6] <==
	I1217 08:32:03.418631       1 server_others.go:69] "Using iptables proxy"
	I1217 08:32:03.439897       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1217 08:32:03.486811       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:32:03.490631       1 server_others.go:152] "Using iptables Proxier"
	I1217 08:32:03.490689       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 08:32:03.490697       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 08:32:03.490737       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 08:32:03.491045       1 server.go:846] "Version info" version="v1.28.0"
	I1217 08:32:03.491756       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:32:03.493375       1 config.go:188] "Starting service config controller"
	I1217 08:32:03.493661       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 08:32:03.493748       1 config.go:97] "Starting endpoint slice config controller"
	I1217 08:32:03.494060       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 08:32:03.499808       1 config.go:315] "Starting node config controller"
	I1217 08:32:03.499871       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 08:32:03.594451       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 08:32:03.595638       1 shared_informer.go:318] Caches are synced for service config
	I1217 08:32:03.599950       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [335c1393f43cf5b27d279d6d1bae26a9accf4dc86e0edcea4375ddf3950e543b] <==
	W1217 08:31:46.877245       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1217 08:31:46.878910       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:31:46.882586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1217 08:31:46.883282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1217 08:31:46.909832       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1217 08:31:46.909909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1217 08:31:46.980673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1217 08:31:46.980711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1217 08:31:46.983223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1217 08:31:46.983266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1217 08:31:47.038207       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1217 08:31:47.038246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1217 08:31:47.174318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1217 08:31:47.174362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1217 08:31:47.223699       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1217 08:31:47.223816       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1217 08:31:47.279273       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1217 08:31:47.279473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1217 08:31:47.284949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1217 08:31:47.285248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1217 08:31:47.304744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1217 08:31:47.304814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1217 08:31:47.390894       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1217 08:31:47.391210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1217 08:31:49.330378       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.157252    1410 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.805257    1410 topology_manager.go:215] "Topology Admit Handler" podUID="e0ce0d47-e184-464c-8ec0-4907f3ab9b41" podNamespace="kube-system" podName="kube-proxy-cwfwr"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.810491    1410 topology_manager.go:215] "Topology Admit Handler" podUID="59d4e46e-e40e-41fe-af7d-613f48f08315" podNamespace="kube-system" podName="kindnet-x9g6n"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.876967    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0ce0d47-e184-464c-8ec0-4907f3ab9b41-xtables-lock\") pod \"kube-proxy-cwfwr\" (UID: \"e0ce0d47-e184-464c-8ec0-4907f3ab9b41\") " pod="kube-system/kube-proxy-cwfwr"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.877037    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6gv\" (UniqueName: \"kubernetes.io/projected/e0ce0d47-e184-464c-8ec0-4907f3ab9b41-kube-api-access-zp6gv\") pod \"kube-proxy-cwfwr\" (UID: \"e0ce0d47-e184-464c-8ec0-4907f3ab9b41\") " pod="kube-system/kube-proxy-cwfwr"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.877076    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59d4e46e-e40e-41fe-af7d-613f48f08315-cni-cfg\") pod \"kindnet-x9g6n\" (UID: \"59d4e46e-e40e-41fe-af7d-613f48f08315\") " pod="kube-system/kindnet-x9g6n"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.877111    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59d4e46e-e40e-41fe-af7d-613f48f08315-lib-modules\") pod \"kindnet-x9g6n\" (UID: \"59d4e46e-e40e-41fe-af7d-613f48f08315\") " pod="kube-system/kindnet-x9g6n"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.877158    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0ce0d47-e184-464c-8ec0-4907f3ab9b41-kube-proxy\") pod \"kube-proxy-cwfwr\" (UID: \"e0ce0d47-e184-464c-8ec0-4907f3ab9b41\") " pod="kube-system/kube-proxy-cwfwr"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.877193    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0ce0d47-e184-464c-8ec0-4907f3ab9b41-lib-modules\") pod \"kube-proxy-cwfwr\" (UID: \"e0ce0d47-e184-464c-8ec0-4907f3ab9b41\") " pod="kube-system/kube-proxy-cwfwr"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.877226    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4ql\" (UniqueName: \"kubernetes.io/projected/59d4e46e-e40e-41fe-af7d-613f48f08315-kube-api-access-sd4ql\") pod \"kindnet-x9g6n\" (UID: \"59d4e46e-e40e-41fe-af7d-613f48f08315\") " pod="kube-system/kindnet-x9g6n"
	Dec 17 08:32:02 old-k8s-version-640910 kubelet[1410]: I1217 08:32:02.877259    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59d4e46e-e40e-41fe-af7d-613f48f08315-xtables-lock\") pod \"kindnet-x9g6n\" (UID: \"59d4e46e-e40e-41fe-af7d-613f48f08315\") " pod="kube-system/kindnet-x9g6n"
	Dec 17 08:32:03 old-k8s-version-640910 kubelet[1410]: I1217 08:32:03.399043    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cwfwr" podStartSLOduration=1.398983724 podCreationTimestamp="2025-12-17 08:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:03.394324302 +0000 UTC m=+14.223306227" watchObservedRunningTime="2025-12-17 08:32:03.398983724 +0000 UTC m=+14.227965639"
	Dec 17 08:32:08 old-k8s-version-640910 kubelet[1410]: I1217 08:32:08.399512    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-x9g6n" podStartSLOduration=1.645198668 podCreationTimestamp="2025-12-17 08:32:02 +0000 UTC" firstStartedPulling="2025-12-17 08:32:03.139463965 +0000 UTC m=+13.968445862" lastFinishedPulling="2025-12-17 08:32:07.893060354 +0000 UTC m=+18.722042262" observedRunningTime="2025-12-17 08:32:08.398370288 +0000 UTC m=+19.227352202" watchObservedRunningTime="2025-12-17 08:32:08.398795068 +0000 UTC m=+19.227776981"
	Dec 17 08:32:18 old-k8s-version-640910 kubelet[1410]: I1217 08:32:18.699659    1410 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 17 08:32:18 old-k8s-version-640910 kubelet[1410]: I1217 08:32:18.728292    1410 topology_manager.go:215] "Topology Admit Handler" podUID="5aaae8c7-6580-4b9a-8d54-442a96236756" podNamespace="kube-system" podName="storage-provisioner"
	Dec 17 08:32:18 old-k8s-version-640910 kubelet[1410]: I1217 08:32:18.729801    1410 topology_manager.go:215] "Topology Admit Handler" podUID="14d0e140-912f-42bd-a799-4db74ca65844" podNamespace="kube-system" podName="coredns-5dd5756b68-mr99d"
	Dec 17 08:32:18 old-k8s-version-640910 kubelet[1410]: I1217 08:32:18.795566    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7nq\" (UniqueName: \"kubernetes.io/projected/5aaae8c7-6580-4b9a-8d54-442a96236756-kube-api-access-kx7nq\") pod \"storage-provisioner\" (UID: \"5aaae8c7-6580-4b9a-8d54-442a96236756\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:18 old-k8s-version-640910 kubelet[1410]: I1217 08:32:18.795643    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gxqs\" (UniqueName: \"kubernetes.io/projected/14d0e140-912f-42bd-a799-4db74ca65844-kube-api-access-6gxqs\") pod \"coredns-5dd5756b68-mr99d\" (UID: \"14d0e140-912f-42bd-a799-4db74ca65844\") " pod="kube-system/coredns-5dd5756b68-mr99d"
	Dec 17 08:32:18 old-k8s-version-640910 kubelet[1410]: I1217 08:32:18.795677    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14d0e140-912f-42bd-a799-4db74ca65844-config-volume\") pod \"coredns-5dd5756b68-mr99d\" (UID: \"14d0e140-912f-42bd-a799-4db74ca65844\") " pod="kube-system/coredns-5dd5756b68-mr99d"
	Dec 17 08:32:18 old-k8s-version-640910 kubelet[1410]: I1217 08:32:18.795697    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5aaae8c7-6580-4b9a-8d54-442a96236756-tmp\") pod \"storage-provisioner\" (UID: \"5aaae8c7-6580-4b9a-8d54-442a96236756\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:19 old-k8s-version-640910 kubelet[1410]: I1217 08:32:19.428585    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.428506955 podCreationTimestamp="2025-12-17 08:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:19.42844683 +0000 UTC m=+30.257428756" watchObservedRunningTime="2025-12-17 08:32:19.428506955 +0000 UTC m=+30.257488868"
	Dec 17 08:32:21 old-k8s-version-640910 kubelet[1410]: I1217 08:32:21.527373    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mr99d" podStartSLOduration=18.527274334 podCreationTimestamp="2025-12-17 08:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:19.447898879 +0000 UTC m=+30.276880792" watchObservedRunningTime="2025-12-17 08:32:21.527274334 +0000 UTC m=+32.356256575"
	Dec 17 08:32:21 old-k8s-version-640910 kubelet[1410]: I1217 08:32:21.527857    1410 topology_manager.go:215] "Topology Admit Handler" podUID="0f766262-719c-4660-98c5-17e8294dcee3" podNamespace="default" podName="busybox"
	Dec 17 08:32:21 old-k8s-version-640910 kubelet[1410]: I1217 08:32:21.615328    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btswt\" (UniqueName: \"kubernetes.io/projected/0f766262-719c-4660-98c5-17e8294dcee3-kube-api-access-btswt\") pod \"busybox\" (UID: \"0f766262-719c-4660-98c5-17e8294dcee3\") " pod="default/busybox"
	Dec 17 08:32:24 old-k8s-version-640910 kubelet[1410]: I1217 08:32:24.434785    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.078022393 podCreationTimestamp="2025-12-17 08:32:21 +0000 UTC" firstStartedPulling="2025-12-17 08:32:21.851227686 +0000 UTC m=+32.680209591" lastFinishedPulling="2025-12-17 08:32:24.207929193 +0000 UTC m=+35.036911089" observedRunningTime="2025-12-17 08:32:24.434406267 +0000 UTC m=+35.263388181" watchObservedRunningTime="2025-12-17 08:32:24.434723891 +0000 UTC m=+35.263705805"
	
	
	==> storage-provisioner [93f895229394b81694f19a614b9ae193836d03f071dce2fad9fd0bbe74fa2e89] <==
	I1217 08:32:19.090470       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:32:19.103242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:32:19.103313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 08:32:19.116861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:32:19.117063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6840e7cf-d238-43b9-83af-eb3cc68a82f2", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-640910_f7cf6dcf-c62e-47c2-bdd8-6ed1353d12b0 became leader
	I1217 08:32:19.117245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-640910_f7cf6dcf-c62e-47c2-bdd8-6ed1353d12b0!
	I1217 08:32:19.218434       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-640910_f7cf6dcf-c62e-47c2-bdd8-6ed1353d12b0!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-640910 -n old-k8s-version-640910
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-640910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.73s)
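A side note on the kube-scheduler log above: the "forbidden" reflector messages are typical of a scheduler that starts before its RBAC bindings exist, and they stop once the informer caches sync (the final "Caches are synced" line). They are not the cause of this failure. If such messages persisted, one hedged way to confirm the scheduler's permissions would be impersonation with kubectl (a diagnostic sketch, not part of the test run):

    kubectl --context old-k8s-version-640910 auth can-i list persistentvolumeclaims \
      --as=system:kube-scheduler --all-namespaces
    kubectl --context old-k8s-version-640910 auth can-i watch csinodes.storage.k8s.io \
      --as=system:kube-scheduler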

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (333.96037ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
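The MK_ADDON_ENABLE_PAUSED error above comes from the paused-state check that addons enable performs: per the stderr, it runs "sudo runc list -f json" on the node and treats a non-zero exit as a failure. A rough way to reproduce that check by hand, assuming the embed-certs-581631 profile is still running (a diagnostic sketch, not part of the test):

    # Re-run the same command minikube uses to list runc containers on the node.
    out/minikube-linux-amd64 -p embed-certs-581631 ssh -- sudo runc list -f json
    # The stderr above points at a missing runc state directory; this confirms it.
    out/minikube-linux-amd64 -p embed-certs-581631 ssh -- ls -ld /run/runc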
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-581631 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-581631 describe deploy/metrics-server -n kube-system: exit status 1 (68.501363ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-581631 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
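The assertion expects the metrics-server deployment's image to carry the fake.domain registry override passed on the command line; since the enable step exited early, the deployment was never created and there is nothing to inspect. Had it been created, one way to check the image would be a jsonpath query (a sketch, equivalent in intent to the describe call above):

    kubectl --context embed-certs-581631 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'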
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-581631
helpers_test.go:244: (dbg) docker inspect embed-certs-581631:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3",
	        "Created": "2025-12-17T08:31:39.009229822Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 868954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:31:39.052728431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/hosts",
	        "LogPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3-json.log",
	        "Name": "/embed-certs-581631",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-581631:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-581631",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3",
	                "LowerDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-581631",
	                "Source": "/var/lib/docker/volumes/embed-certs-581631/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-581631",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-581631",
	                "name.minikube.sigs.k8s.io": "embed-certs-581631",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "488ec7d584044abd17589831b03326b09ce0281dc7dcd60bad3e0e3c32e43b33",
	            "SandboxKey": "/var/run/docker/netns/488ec7d58404",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-581631": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1180462b720da0ae1fa73d0b014c57b2b6955441a1e7b7b4a2e5db28ef5abec",
	                    "EndpointID": "d7332d4a92243afc90e7c201b36705141ed16eeccdebb47c081ee099645d466e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "82:f7:58:36:9d:18",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-581631",
	                        "ce9b768a5250"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
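The inspect output above lists the node container's published ports (for example 8443/tcp mapped to 127.0.0.1:33508 for the apiserver). To pull a single mapping out of that JSON without scrolling, a Go-template query like the following would work (a convenience sketch, not something the test runs):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-581631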
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-581631 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-581631 logs -n 25: (1.384718134s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-055130 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo docker system info                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cri-dockerd --version                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo containerd config dump                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                        │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                         │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                          │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:32:01
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:32:01.552734  876818 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:32:01.553099  876818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:01.553114  876818 out.go:374] Setting ErrFile to fd 2...
	I1217 08:32:01.553121  876818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:01.553340  876818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:32:01.553902  876818 out.go:368] Setting JSON to false
	I1217 08:32:01.555210  876818 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8067,"bootTime":1765952255,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:32:01.555284  876818 start.go:143] virtualization: kvm guest
	I1217 08:32:01.558242  876818 out.go:179] * [default-k8s-diff-port-225657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:32:01.561313  876818 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:32:01.561325  876818 notify.go:221] Checking for updates...
	I1217 08:32:01.568510  876818 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:32:01.571884  876818 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:01.574245  876818 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:32:01.576734  876818 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:32:01.578873  876818 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:32:01.581914  876818 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:01.582052  876818 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:32:01.582137  876818 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:32:01.582248  876818 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:32:01.612172  876818 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:32:01.612311  876818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:01.684785  876818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 08:32:01.672949118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:01.684957  876818 docker.go:319] overlay module found
	I1217 08:32:01.687104  876818 out.go:179] * Using the docker driver based on user configuration
	I1217 08:32:01.688739  876818 start.go:309] selected driver: docker
	I1217 08:32:01.688762  876818 start.go:927] validating driver "docker" against <nil>
	I1217 08:32:01.688779  876818 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:32:01.689371  876818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:01.761694  876818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 08:32:01.749436813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:01.761852  876818 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 08:32:01.762082  876818 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:01.764368  876818 out.go:179] * Using Docker driver with root privileges
	I1217 08:32:01.766035  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:01.766129  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:01.766145  876818 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:32:01.766238  876818 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:01.768170  876818 out.go:179] * Starting "default-k8s-diff-port-225657" primary control-plane node in "default-k8s-diff-port-225657" cluster
	I1217 08:32:01.769863  876818 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:32:01.772343  876818 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:32:01.774131  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:01.774188  876818 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:32:01.774206  876818 cache.go:65] Caching tarball of preloaded images
	I1217 08:32:01.774253  876818 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:32:01.774340  876818 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:32:01.774359  876818 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:32:01.774581  876818 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:32:01.774623  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json: {Name:mkdc1e498a413d8c47a4c9161b8ddc9e11834a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:01.803235  876818 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:32:01.803269  876818 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:32:01.803295  876818 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:32:01.803341  876818 start.go:360] acquireMachinesLock for default-k8s-diff-port-225657: {Name:mkf524609fef75b896bc809c6c5673b68f778ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:32:01.803497  876818 start.go:364] duration metric: took 133.382µs to acquireMachinesLock for "default-k8s-diff-port-225657"
	I1217 08:32:01.803569  876818 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:01.803675  876818 start.go:125] createHost starting for "" (driver="docker")
	I1217 08:31:59.471510  866074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:31:59.487104  866074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1217 08:31:59.492193  866074 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1217 08:31:59.492241  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1217 08:32:01.990912  866074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1217 08:32:02.003508  866074 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1217 08:32:02.003588  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	I1217 08:32:02.288548  866074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:32:02.298803  866074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:32:02.315378  866074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:32:02.402911  866074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 08:32:02.421212  866074 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:32:02.426364  866074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:32:02.442236  866074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:02.553459  866074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:02.590063  866074 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988 for IP: 192.168.94.2
	I1217 08:32:02.590092  866074 certs.go:195] generating shared ca certs ...
	I1217 08:32:02.590113  866074 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.590330  866074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:32:02.590413  866074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:32:02.590429  866074 certs.go:257] generating profile certs ...
	I1217 08:32:02.590514  866074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key
	I1217 08:32:02.590544  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt with IP's: []
	I1217 08:32:02.636814  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt ...
	I1217 08:32:02.636860  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt: {Name:mkc8d6c44408b047376e6be421e3c93768af7dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.637104  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key ...
	I1217 08:32:02.637126  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key: {Name:mk23aabb5dd35dc4380024377e6eece268d19273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.637255  866074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be
	I1217 08:32:02.637279  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 08:31:57.930133  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:58.430261  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:58.930566  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:59.429668  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:59.929814  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.430337  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.930517  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.430253  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.929494  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.430181  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.930157  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.041132  860032 kubeadm.go:1114] duration metric: took 12.777197998s to wait for elevateKubeSystemPrivileges
	I1217 08:32:03.041172  860032 kubeadm.go:403] duration metric: took 25.06139908s to StartCluster
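
The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait: the command is retried at roughly 500ms intervals until the default ServiceAccount exists (about 12.8s here, versus ~3.2s for the embed-certs cluster later in this log). A generic poll-until-success loop in Go, with the timeout and kubeconfig path as illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or ctx expires.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default ServiceAccount exists; RBAC can be applied
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("default ServiceAccount never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		fmt.Println(err)
	}
}
```
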
	I1217 08:32:03.041194  860032 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.041275  860032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:03.042238  860032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.042571  860032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:03.042571  860032 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:03.042772  860032 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:32:03.042598  860032 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:03.042829  860032 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-640910"
	I1217 08:32:03.042846  860032 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-640910"
	I1217 08:32:03.042873  860032 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:32:03.043189  860032 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-640910"
	I1217 08:32:03.043227  860032 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-640910"
	I1217 08:32:03.043387  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.043604  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.044941  860032 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:03.047619  860032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:03.077628  860032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:03.079571  860032 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.079600  860032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:03.079664  860032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:03.079881  860032 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-640910"
	I1217 08:32:03.079930  860032 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:32:03.080421  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.115572  860032 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.115595  860032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:03.115604  860032 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:03.115657  860032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:03.149311  860032 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:03.198402  860032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:03.247949  860032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:03.263689  860032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.280464  860032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.580999  860032 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:03.582028  860032 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-640910" to be "Ready" ...
	I1217 08:32:03.834067  860032 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
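
The "host record injected into CoreDNS's ConfigMap" step above works by piping the coredns Corefile through sed, inserting a hosts{} stanza for host.minikube.internal ahead of the forward plugin, then replacing the ConfigMap. A small Go sketch of the same text transformation, operating on an in-memory Corefile rather than the live ConfigMap:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the forward plugin line,
// mirroring the sed pipeline the log runs against the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.85.1"))
}
```
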
	I1217 08:32:00.400233  866708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:00.406292  866708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 08:32:00.406321  866708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:00.424039  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:32:00.743784  866708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:00.743917  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.743934  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-581631 minikube.k8s.io/updated_at=2025_12_17T08_32_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=embed-certs-581631 minikube.k8s.io/primary=true
	I1217 08:32:00.845521  866708 ops.go:34] apiserver oom_adj: -16
	I1217 08:32:00.845595  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.345810  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.845712  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.345788  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.846718  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.345718  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.845894  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.935808  866708 kubeadm.go:1114] duration metric: took 3.191972569s to wait for elevateKubeSystemPrivileges
	I1217 08:32:03.935854  866708 kubeadm.go:403] duration metric: took 16.523773394s to StartCluster
	I1217 08:32:03.935872  866708 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.935942  866708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:03.937291  866708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.937548  866708 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:03.937670  866708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:03.937680  866708 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:03.937783  866708 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-581631"
	I1217 08:32:03.937801  866708 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-581631"
	I1217 08:32:03.937806  866708 addons.go:70] Setting default-storageclass=true in profile "embed-certs-581631"
	I1217 08:32:03.937828  866708 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:03.937836  866708 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:32:03.937842  866708 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-581631"
	I1217 08:32:03.938130  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.938357  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.941811  866708 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:03.943970  866708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:03.964732  866708 addons.go:239] Setting addon default-storageclass=true in "embed-certs-581631"
	I1217 08:32:03.964785  866708 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:32:03.965299  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.969098  866708 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:03.970610  866708 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.970635  866708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:03.970704  866708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:03.995425  866708 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.995462  866708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:03.995547  866708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:04.006698  866708 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:04.031285  866708 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:04.065134  866708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:02.824596  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be ...
	I1217 08:32:02.824631  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be: {Name:mk45976aa0955a0afc1e8d64278dff519aaa2454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.824859  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be ...
	I1217 08:32:02.824886  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be: {Name:mk2dae5a961985112e8e9209c523ebf3ce607cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.825034  866074 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt
	I1217 08:32:02.825138  866074 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key
	I1217 08:32:02.825220  866074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key
	I1217 08:32:02.825243  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt with IP's: []
	I1217 08:32:02.924760  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt ...
	I1217 08:32:02.924794  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt: {Name:mk267cedf76a400096972e8a1d55b0ea70195e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.925012  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key ...
	I1217 08:32:02.925034  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key: {Name:mkcad6ea1b15d8213d3a172ca1538446ff01dcfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.925290  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:32:02.925355  866074 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:32:02.925366  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:32:02.925400  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:32:02.925435  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:32:02.925467  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:32:02.925552  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:02.926601  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:32:02.955081  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:32:02.999049  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:32:03.023623  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:32:03.048610  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:32:03.080188  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:32:03.113501  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:32:03.144381  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:32:03.177658  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:32:03.213764  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:32:03.241889  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:32:03.273169  866074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:32:03.300818  866074 ssh_runner.go:195] Run: openssl version
	I1217 08:32:03.311109  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.324197  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:32:03.346933  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.353468  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.353573  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.417221  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:03.429614  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:03.442003  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.455221  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:32:03.469042  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.477276  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.477361  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.534695  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:32:03.547563  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:32:03.557640  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.571257  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:32:03.585135  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.591025  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.591099  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.646989  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:32:03.662879  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
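
Each CA bundle copied to /usr/share/ca-certificates above is hashed with `openssl x509 -hash -noout` and then symlinked as `<hash>.0` under /etc/ssl/certs, which is how OpenSSL locates trust anchors by subject hash. A sketch that shells out the same way; the hard-coded paths and simplified error handling are assumptions for illustration.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of pemPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, like the ln -fs calls in the log.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
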
	I1217 08:32:03.679566  866074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:32:03.687395  866074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:32:03.687462  866074 kubeadm.go:401] StartCluster: {Name:no-preload-936988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-936988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:03.687578  866074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:32:03.687637  866074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:32:03.727391  866074 cri.go:89] found id: ""
	I1217 08:32:03.727501  866074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:32:03.738224  866074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:32:03.748723  866074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:32:03.748793  866074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:32:03.760841  866074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:32:03.760866  866074 kubeadm.go:158] found existing configuration files:
	
	I1217 08:32:03.760920  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 08:32:03.772427  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:32:03.772500  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:32:03.783020  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 08:32:03.794743  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:32:03.794817  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:32:03.805322  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 08:32:03.817490  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:32:03.817564  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:32:03.831785  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 08:32:03.843542  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:32:03.843616  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
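
The grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init regenerates it. A compact Go sketch of that check-and-remove loop, with the endpoint and file list taken from the log and the function name as an illustrative stand-in:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConfigs removes kubeconfigs that do not reference the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s lacks %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```
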
	I1217 08:32:03.853047  866074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:32:03.899091  866074 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 08:32:03.899195  866074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:32:04.004708  866074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:32:04.004803  866074 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:32:04.004848  866074 kubeadm.go:319] OS: Linux
	I1217 08:32:04.004909  866074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:32:04.004973  866074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:32:04.005038  866074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:32:04.006028  866074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:32:04.006112  866074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:32:04.006175  866074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:32:04.006240  866074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:32:04.006308  866074 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:32:04.119188  866074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:32:04.119332  866074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:32:04.119474  866074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:32:04.144669  866074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:32:04.132626  866708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:04.136786  866708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:04.152680  866708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:04.291998  866708 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:04.293620  866708 node_ready.go:35] waiting up to 6m0s for node "embed-certs-581631" to be "Ready" ...
	I1217 08:32:04.514479  866708 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 08:32:04.149161  866074 out.go:252]   - Generating certificates and keys ...
	I1217 08:32:04.149271  866074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:32:04.149357  866074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:32:04.345146  866074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:32:04.456420  866074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:32:04.569867  866074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:32:04.769981  866074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:32:04.962017  866074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:32:04.962211  866074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-936988] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 08:32:05.189992  866074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:32:05.190862  866074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-936988] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 08:32:05.314135  866074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:32:05.436298  866074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:32:05.639248  866074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:32:05.639451  866074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:32:05.799909  866074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:32:05.903137  866074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:32:06.294633  866074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:32:06.421349  866074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:32:06.498721  866074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:32:06.499367  866074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:32:06.544114  866074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:32:01.806337  876818 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:32:01.806789  876818 start.go:159] libmachine.API.Create for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:32:01.806841  876818 client.go:173] LocalClient.Create starting
	I1217 08:32:01.806928  876818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:32:01.806973  876818 main.go:143] libmachine: Decoding PEM data...
	I1217 08:32:01.807004  876818 main.go:143] libmachine: Parsing certificate...
	I1217 08:32:01.807100  876818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:32:01.807134  876818 main.go:143] libmachine: Decoding PEM data...
	I1217 08:32:01.807156  876818 main.go:143] libmachine: Parsing certificate...
	I1217 08:32:01.807598  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:32:01.828194  876818 cli_runner.go:211] docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:32:01.828308  876818 network_create.go:284] running [docker network inspect default-k8s-diff-port-225657] to gather additional debugging logs...
	I1217 08:32:01.828345  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657
	W1217 08:32:01.849757  876818 cli_runner.go:211] docker network inspect default-k8s-diff-port-225657 returned with exit code 1
	I1217 08:32:01.849798  876818 network_create.go:287] error running [docker network inspect default-k8s-diff-port-225657]: docker network inspect default-k8s-diff-port-225657: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-225657 not found
	I1217 08:32:01.849822  876818 network_create.go:289] output of [docker network inspect default-k8s-diff-port-225657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-225657 not found
	
	** /stderr **
	I1217 08:32:01.849945  876818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:32:01.874361  876818 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:32:01.875036  876818 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:32:01.875878  876818 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:32:01.876831  876818 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e1180462b720 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:c6:ea:2d:3c:aa} reservation:<nil>}
	I1217 08:32:01.877453  876818 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b355f632d1e4 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:2c:e1:34:c1:34} reservation:<nil>}
	I1217 08:32:01.878105  876818 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-31552e72b7c3 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c2:be:20:58:f7:57} reservation:<nil>}
	I1217 08:32:01.879300  876818 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020fbec0}
	I1217 08:32:01.879341  876818 network_create.go:124] attempt to create docker network default-k8s-diff-port-225657 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 08:32:01.879423  876818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 default-k8s-diff-port-225657
	I1217 08:32:01.960561  876818 network_create.go:108] docker network default-k8s-diff-port-225657 192.168.103.0/24 created
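
To place the new cluster, the driver walks its private 192.168.x.0/24 candidates (49, 58, 67, 76, 85, 94, ... judging from the skipped subnets above) and takes the first one no existing bridge network occupies, here 192.168.103.0/24. A simplified sketch of that scan; the step of 9 and the upper bound are inferred from the logged sequence, not taken from minikube's source.

```go
package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (third octet
// stepping by 9, as seen in the log) and returns the first subnet not taken.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	subnet, ok := firstFreeSubnet(taken)
	fmt.Println(subnet, ok) // 192.168.103.0/24 true
}
```
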
	I1217 08:32:01.960599  876818 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-225657" container
	I1217 08:32:01.960690  876818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:32:01.985847  876818 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-225657 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:32:02.021946  876818 oci.go:103] Successfully created a docker volume default-k8s-diff-port-225657
	I1217 08:32:02.022045  876818 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-225657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --entrypoint /usr/bin/test -v default-k8s-diff-port-225657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:32:02.718937  876818 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-225657
	I1217 08:32:02.719022  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:02.719035  876818 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:32:02.719125  876818 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 08:32:06.616828  866074 out.go:252]   - Booting up control plane ...
	I1217 08:32:06.617019  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:32:06.617189  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:32:06.617313  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:32:06.617525  866074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:32:06.617836  866074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:32:06.618011  866074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:32:06.618170  866074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:32:06.618229  866074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:32:06.755893  866074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:32:06.756060  866074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:32:07.756781  866074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001037016s
	I1217 08:32:07.760245  866074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:32:07.760395  866074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 08:32:07.760553  866074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:32:07.760691  866074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
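
The control-plane-check phase above polls the kubelet healthz on 10248 and the component endpoints (apiserver /livez, controller-manager /healthz, scheduler /livez) until each answers 200, with a 4m ceiling. A bare-bones poller for one such endpoint; TLS verification is skipped here purely for illustration, whereas the real check validates against the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
		"https://192.168.94.2:8443/livez", // kube-apiserver
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}
```
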
	I1217 08:32:03.835993  860032 addons.go:530] duration metric: took 793.383913ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:04.092564  860032 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-640910" context rescaled to 1 replicas
	W1217 08:32:05.590407  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:04.516501  866708 addons.go:530] duration metric: took 578.822881ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:04.798269  866708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-581631" context rescaled to 1 replicas
	W1217 08:32:06.297588  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:08.297713  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
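
The node_ready lines above retry for up to 6m while the node still reports Ready=False (it flips once the CNI and kubelet settle). A client-go sketch of that wait; the kubeconfig path, node name, and 2s poll interval are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "embed-certs-581631"))
}
```
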
	I1217 08:32:07.917031  876818 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (5.197831615s)
	I1217 08:32:07.917065  876818 kic.go:203] duration metric: took 5.198025236s to extract preloaded images to volume ...
	W1217 08:32:07.917162  876818 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 08:32:07.917207  876818 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 08:32:07.917258  876818 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 08:32:07.988840  876818 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-225657 --name default-k8s-diff-port-225657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --network default-k8s-diff-port-225657 --ip 192.168.103.2 --volume default-k8s-diff-port-225657:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 08:32:08.400242  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Running}}
	I1217 08:32:08.424855  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.450896  876818 cli_runner.go:164] Run: docker exec default-k8s-diff-port-225657 stat /var/lib/dpkg/alternatives/iptables
	I1217 08:32:08.523021  876818 oci.go:144] the created container "default-k8s-diff-port-225657" has a running status.
	I1217 08:32:08.523088  876818 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519...
	I1217 08:32:08.525770  876818 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 08:32:08.560942  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.585171  876818 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 08:32:08.585195  876818 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-225657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 08:32:08.651792  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.677101  876818 machine.go:94] provisionDockerMachine start ...
	I1217 08:32:08.677481  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:08.707459  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:08.707676  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:08.707703  876818 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:32:08.708734  876818 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57932->127.0.0.1:33510: read: connection reset by peer
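
The "connection reset by peer" above is expected right after `docker run`: sshd inside the kic container is not yet accepting connections, so the provisioner simply retries the dial until the handshake succeeds (as it does a few seconds later in this log). A retry-dial sketch using golang.org/x/crypto/ssh; the address, user, key path, and backoff are illustrative assumptions.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd in the freshly started container
// accepts the handshake, sleeping between attempts.
func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
		Timeout:         5 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err // e.g. "connection reset by peer" before sshd is up
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh never came up at %s: %w", addr, lastErr)
}

func main() {
	client, err := dialWithRetry("127.0.0.1:33510", "docker",
		os.Getenv("HOME")+"/.minikube/machines/default-k8s-diff-port-225657/id_ed25519", 30)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
```
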
	I1217 08:32:08.765985  866074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005397586s
	I1217 08:32:09.803829  866074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.0435496s
	I1217 08:32:11.763049  866074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002842384s
	I1217 08:32:11.782622  866074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:32:11.796483  866074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:32:11.808857  866074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:32:11.809099  866074 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-936988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:32:11.818163  866074 kubeadm.go:319] [bootstrap-token] Using token: 7nqi1p.ejost2d3dqegwn4g
	I1217 08:32:11.819926  866074 out.go:252]   - Configuring RBAC rules ...
	I1217 08:32:11.820101  866074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:32:11.823946  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:32:11.832263  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:32:11.835817  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:32:11.838856  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:32:11.842484  866074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:32:12.169848  866074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:32:12.589615  866074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	W1217 08:32:08.088510  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:10.585565  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:12.585741  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:13.169675  866074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:32:13.171934  866074 kubeadm.go:319] 
	I1217 08:32:13.172034  866074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:32:13.172045  866074 kubeadm.go:319] 
	I1217 08:32:13.172161  866074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:32:13.172172  866074 kubeadm.go:319] 
	I1217 08:32:13.172200  866074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:32:13.172277  866074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:32:13.172344  866074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:32:13.172355  866074 kubeadm.go:319] 
	I1217 08:32:13.172415  866074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:32:13.172425  866074 kubeadm.go:319] 
	I1217 08:32:13.172481  866074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:32:13.172494  866074 kubeadm.go:319] 
	I1217 08:32:13.172584  866074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:32:13.172726  866074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:32:13.172821  866074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:32:13.172831  866074 kubeadm.go:319] 
	I1217 08:32:13.172934  866074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:32:13.173027  866074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:32:13.173036  866074 kubeadm.go:319] 
	I1217 08:32:13.173135  866074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7nqi1p.ejost2d3dqegwn4g \
	I1217 08:32:13.173265  866074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:32:13.173294  866074 kubeadm.go:319] 	--control-plane 
	I1217 08:32:13.173303  866074 kubeadm.go:319] 
	I1217 08:32:13.173408  866074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:32:13.173418  866074 kubeadm.go:319] 
	I1217 08:32:13.173517  866074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7nqi1p.ejost2d3dqegwn4g \
	I1217 08:32:13.173666  866074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:32:13.177005  866074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:32:13.177121  866074 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:32:13.177175  866074 cni.go:84] Creating CNI manager for ""
	I1217 08:32:13.177196  866074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:13.179613  866074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 08:32:10.797040  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:13.297857  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:11.847591  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:32:11.847629  876818 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225657"
	I1217 08:32:11.847703  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:11.870068  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:11.870172  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:11.870184  876818 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225657 && echo "default-k8s-diff-port-225657" | sudo tee /etc/hostname
	I1217 08:32:12.017902  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:32:12.017995  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.040970  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:12.041124  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:12.041148  876818 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225657/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:32:12.174812  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:32:12.174846  876818 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:32:12.174878  876818 ubuntu.go:190] setting up certificates
	I1217 08:32:12.174891  876818 provision.go:84] configureAuth start
	I1217 08:32:12.174961  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:12.195929  876818 provision.go:143] copyHostCerts
	I1217 08:32:12.196007  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:32:12.196020  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:32:12.196106  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:32:12.196259  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:32:12.196274  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:32:12.196320  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:32:12.196402  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:32:12.196413  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:32:12.196438  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:32:12.196495  876818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225657 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-225657 localhost minikube]
	I1217 08:32:12.298236  876818 provision.go:177] copyRemoteCerts
	I1217 08:32:12.298295  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:32:12.298335  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.318951  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:12.424332  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:32:12.450112  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 08:32:12.470525  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 08:32:12.491813  876818 provision.go:87] duration metric: took 316.905148ms to configureAuth
	I1217 08:32:12.491849  876818 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:32:12.492046  876818 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:12.492151  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.513001  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:12.513125  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:12.513141  876818 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:32:12.803327  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:32:12.803363  876818 machine.go:97] duration metric: took 4.126112041s to provisionDockerMachine
	I1217 08:32:12.803378  876818 client.go:176] duration metric: took 10.996527369s to LocalClient.Create
	I1217 08:32:12.803405  876818 start.go:167] duration metric: took 10.99661651s to libmachine.API.Create "default-k8s-diff-port-225657"
	I1217 08:32:12.803414  876818 start.go:293] postStartSetup for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:32:12.803428  876818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:32:12.803520  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:32:12.803590  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.822159  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:12.925471  876818 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:32:12.929675  876818 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:32:12.929714  876818 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:32:12.929734  876818 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:32:12.929814  876818 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:32:12.929919  876818 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:32:12.930052  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:32:12.938904  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:12.961125  876818 start.go:296] duration metric: took 157.693442ms for postStartSetup
	I1217 08:32:12.961555  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:12.982070  876818 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:32:12.982402  876818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:32:12.982460  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.002877  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.095087  876818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:32:13.100174  876818 start.go:128] duration metric: took 11.296476774s to createHost
	I1217 08:32:13.100209  876818 start.go:83] releasing machines lock for "default-k8s-diff-port-225657", held for 11.296696714s
	I1217 08:32:13.100279  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:13.119195  876818 ssh_runner.go:195] Run: cat /version.json
	I1217 08:32:13.119271  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.119274  876818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:32:13.119342  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.139794  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.140091  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.292825  876818 ssh_runner.go:195] Run: systemctl --version
	I1217 08:32:13.301062  876818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:32:13.347657  876818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:32:13.353086  876818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:32:13.353180  876818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:32:13.386293  876818 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 08:32:13.386324  876818 start.go:496] detecting cgroup driver to use...
	I1217 08:32:13.386363  876818 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:32:13.386440  876818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:32:13.406165  876818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:32:13.421667  876818 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:32:13.421735  876818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:32:13.445063  876818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:32:13.474069  876818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:32:13.589514  876818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:32:13.687876  876818 docker.go:234] disabling docker service ...
	I1217 08:32:13.687948  876818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:32:13.709115  876818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:32:13.725179  876818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:32:13.816070  876818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:32:13.908965  876818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:32:13.922931  876818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:32:13.938488  876818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:32:13.938601  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.949886  876818 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:32:13.949966  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.959623  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.969563  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.980342  876818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:32:13.989685  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.999720  876818 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:14.014863  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:14.024968  876818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:32:14.033477  876818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:32:14.041958  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:14.130836  876818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 08:32:14.324161  876818 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:32:14.324240  876818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:32:14.328783  876818 start.go:564] Will wait 60s for crictl version
	I1217 08:32:14.328842  876818 ssh_runner.go:195] Run: which crictl
	I1217 08:32:14.332732  876818 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:32:14.358741  876818 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
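(Editorial sketch, not part of the captured log.) The step above restarts CRI-O and then waits up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal, self-contained Go sketch of such a wait loop — the path and 500ms polling interval are assumptions for illustration, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given socket path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the runtime has created its socket
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}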
	I1217 08:32:14.358828  876818 ssh_runner.go:195] Run: crio --version
	I1217 08:32:14.389865  876818 ssh_runner.go:195] Run: crio --version
	I1217 08:32:14.421345  876818 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 08:32:14.423125  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:32:14.443156  876818 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:32:14.448782  876818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:32:14.461614  876818 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:32:14.461796  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:14.461847  876818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:32:14.497773  876818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:32:14.497797  876818 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:32:14.497850  876818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:32:14.528137  876818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:32:14.528160  876818 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:32:14.528168  876818 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1217 08:32:14.528254  876818 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:32:14.528318  876818 ssh_runner.go:195] Run: crio config
	I1217 08:32:14.584472  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:14.584502  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:14.584524  876818 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:32:14.584583  876818 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225657 NodeName:default-k8s-diff-port-225657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:32:14.584763  876818 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:32:14.584847  876818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:32:14.594854  876818 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:32:14.594919  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:32:14.605478  876818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1217 08:32:14.621822  876818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:32:14.641660  876818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 08:32:14.656519  876818 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:32:14.660626  876818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
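(Editorial sketch, not part of the captured log.) The two commands above first grep /etc/hosts for an existing "192.168.103.2	control-plane.minikube.internal" record and then rewrite the file with any stale record dropped and a fresh one appended. A rough Go equivalent of that idempotent update, operating on a string body instead of the real file (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// ensureHostRecord drops any line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" record, mirroring the grep -v / echo pipeline logged above.
func ensureHostRecord(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale record for this name; re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.103.1\thost.minikube.internal\n"
	fmt.Print(ensureHostRecord(hosts, "192.168.103.2", "control-plane.minikube.internal"))
}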
	I1217 08:32:14.672783  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:14.763211  876818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:14.795518  876818 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657 for IP: 192.168.103.2
	I1217 08:32:14.795574  876818 certs.go:195] generating shared ca certs ...
	I1217 08:32:14.795596  876818 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.795767  876818 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:32:14.795826  876818 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:32:14.795840  876818 certs.go:257] generating profile certs ...
	I1217 08:32:14.795954  876818 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key
	I1217 08:32:14.795977  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt with IP's: []
	I1217 08:32:14.863228  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt ...
	I1217 08:32:14.863262  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt: {Name:mkdcfa20690e66f7711fa7eedb1c17f0013cea3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.863459  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key ...
	I1217 08:32:14.863479  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key: {Name:mk0c147f99dbcd9cd0b76dd50dbcc7358fb09eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.863633  876818 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92
	I1217 08:32:14.863658  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 08:32:14.926506  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 ...
	I1217 08:32:14.926559  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92: {Name:mkeab2e9787f4fdc822d05ef2a5a31d73807e7a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.926783  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92 ...
	I1217 08:32:14.926807  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92: {Name:mk908b8eefd79aa9fd3e47b0e9dd700056cd3a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.926928  876818 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt
	I1217 08:32:14.927054  876818 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key
	I1217 08:32:14.927150  876818 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key
	I1217 08:32:14.927179  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt with IP's: []
	I1217 08:32:14.999838  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt ...
	I1217 08:32:14.999868  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt: {Name:mk1fe736b631b3578e9134ad8e647a4ce10e1dfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:15.000043  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key ...
	I1217 08:32:15.000057  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key: {Name:mkff008ec12026d35b6afe310c5ec1f253ee363c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:15.000226  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:32:15.000264  876818 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:32:15.000274  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:32:15.000297  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:32:15.000320  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:32:15.000412  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:32:15.000466  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:15.001158  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:32:15.022100  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:32:15.041157  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:32:15.060148  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:32:15.083276  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 08:32:15.106435  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:32:15.130968  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:32:15.151120  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:32:15.171738  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:32:15.196153  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:32:15.216671  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:32:15.237947  876818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:32:15.252432  876818 ssh_runner.go:195] Run: openssl version
	I1217 08:32:15.259775  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.268463  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:32:15.277889  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.282785  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.282840  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.321178  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:15.332457  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:15.342212  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.350940  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:32:15.359554  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.363904  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.363983  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.405120  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:32:15.413813  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:32:15.422721  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.431742  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:32:15.440099  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.445063  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.445150  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.485945  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:32:15.495194  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
	I1217 08:32:15.504455  876818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:32:15.509009  876818 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:32:15.509078  876818 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:15.509159  876818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:32:15.509212  876818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:32:15.542000  876818 cri.go:89] found id: ""
	I1217 08:32:15.542078  876818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:32:15.551359  876818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:32:15.561709  876818 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:32:15.561782  876818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:32:15.571765  876818 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:32:15.571803  876818 kubeadm.go:158] found existing configuration files:
	
	I1217 08:32:15.571859  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 08:32:15.582295  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:32:15.582353  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:32:15.591491  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 08:32:15.600423  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:32:15.600486  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:32:15.609303  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 08:32:15.618950  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:32:15.619020  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:32:15.628515  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 08:32:15.637988  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:32:15.638046  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:32:15.646553  876818 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:32:15.711259  876818 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:32:15.775493  876818 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:32:13.181565  866074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:13.186365  866074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 08:32:13.186392  866074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:13.200333  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:32:13.442509  866074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:13.442697  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:13.442819  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-936988 minikube.k8s.io/updated_at=2025_12_17T08_32_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=no-preload-936988 minikube.k8s.io/primary=true
	I1217 08:32:13.461848  866074 ops.go:34] apiserver oom_adj: -16
	I1217 08:32:13.552148  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:14.053185  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:14.552761  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:15.052803  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:15.552460  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:16.052244  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:16.552431  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.052268  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.552825  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.630473  866074 kubeadm.go:1114] duration metric: took 4.187821582s to wait for elevateKubeSystemPrivileges
	I1217 08:32:17.630512  866074 kubeadm.go:403] duration metric: took 13.943055923s to StartCluster
	I1217 08:32:17.630550  866074 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:17.630631  866074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:17.632218  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:17.632577  866074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:17.632602  866074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:17.632683  866074 addons.go:70] Setting storage-provisioner=true in profile "no-preload-936988"
	I1217 08:32:17.632702  866074 addons.go:239] Setting addon storage-provisioner=true in "no-preload-936988"
	I1217 08:32:17.632731  866074 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:32:17.632780  866074 addons.go:70] Setting default-storageclass=true in profile "no-preload-936988"
	I1217 08:32:17.632811  866074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-936988"
	I1217 08:32:17.633099  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.632569  866074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:17.633241  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.633548  866074 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:32:17.634963  866074 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:17.640101  866074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:17.668302  866074 addons.go:239] Setting addon default-storageclass=true in "no-preload-936988"
	I1217 08:32:17.668358  866074 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:32:17.668491  866074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:17.668875  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.670105  866074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:17.670126  866074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:17.670199  866074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-936988
	I1217 08:32:17.704878  866074 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/no-preload-936988/id_ed25519 Username:docker}
	I1217 08:32:17.708635  866074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:17.708697  866074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:17.708784  866074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-936988
	I1217 08:32:17.736140  866074 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/no-preload-936988/id_ed25519 Username:docker}
	I1217 08:32:17.758941  866074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:17.809686  866074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1217 08:32:14.586794  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:16.587145  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:17.830908  866074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:17.863741  866074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:18.003143  866074 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:18.004756  866074 node_ready.go:35] waiting up to 6m0s for node "no-preload-936988" to be "Ready" ...
	I1217 08:32:18.233677  866074 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 08:32:15.797222  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:17.797847  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:19.088257  860032 node_ready.go:49] node "old-k8s-version-640910" is "Ready"
	I1217 08:32:19.088298  860032 node_ready.go:38] duration metric: took 15.506235047s for node "old-k8s-version-640910" to be "Ready" ...
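(Editorial sketch, not part of the captured log.) The repeated node_ready.go lines throughout this section are the same wait loop on different clusters: poll the node object until its Ready condition flips to True, logging "will retry" while it is False. A sketch of that pattern assuming k8s.io/client-go is available — this is not minikube's code, and the kubeconfig source and node name are placeholders taken from the log:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "old-k8s-version-640910" // node name taken from the log above
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Fprintln(os.Stderr, "node never became Ready:", err)
		os.Exit(1)
	}
	fmt.Println("node is Ready")
}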
	I1217 08:32:19.088315  860032 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:19.088364  860032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:19.106742  860032 api_server.go:72] duration metric: took 16.064061733s to wait for apiserver process to appear ...
	I1217 08:32:19.106778  860032 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:19.106802  860032 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 08:32:19.113046  860032 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 08:32:19.114637  860032 api_server.go:141] control plane version: v1.28.0
	I1217 08:32:19.114666  860032 api_server.go:131] duration metric: took 7.880626ms to wait for apiserver health ...
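(Editorial sketch, not part of the captured log.) The apiserver health wait above amounts to an HTTPS GET against /healthz until it returns 200 with body "ok". A stripped-down Go sketch of that probe — illustrative only, not minikube's implementation; it skips certificate verification for brevity rather than loading the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust for your cluster.
	url := "https://192.168.85.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Brevity only: skip cert verification instead of trusting the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}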
	I1217 08:32:19.114680  860032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:19.120583  860032 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:19.120638  860032 system_pods.go:61] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.120654  860032 system_pods.go:61] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.120662  860032 system_pods.go:61] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.120680  860032 system_pods.go:61] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.120695  860032 system_pods.go:61] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.120700  860032 system_pods.go:61] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.120706  860032 system_pods.go:61] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.120714  860032 system_pods.go:61] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.120729  860032 system_pods.go:74] duration metric: took 6.039419ms to wait for pod list to return data ...
	I1217 08:32:19.120746  860032 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:19.123936  860032 default_sa.go:45] found service account: "default"
	I1217 08:32:19.123970  860032 default_sa.go:55] duration metric: took 3.215131ms for default service account to be created ...
	I1217 08:32:19.124052  860032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:19.129828  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.129937  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.129949  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.129960  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.129965  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.129971  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.129976  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.129980  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.129987  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.130015  860032 retry.go:31] will retry after 193.985772ms: missing components: kube-dns
	I1217 08:32:19.330692  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.330740  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.330752  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.330761  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.330767  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.330772  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.330777  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.330780  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.330784  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.330808  860032 retry.go:31] will retry after 264.53787ms: missing components: kube-dns
	I1217 08:32:19.602757  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.602794  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Running
	I1217 08:32:19.602803  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.602808  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.602813  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.602818  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.602823  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.602828  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.602833  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Running
	I1217 08:32:19.602844  860032 system_pods.go:126] duration metric: took 478.778338ms to wait for k8s-apps to be running ...
	I1217 08:32:19.602855  860032 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:32:19.602919  860032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:32:19.623074  860032 system_svc.go:56] duration metric: took 20.20768ms WaitForService to wait for kubelet
	I1217 08:32:19.623106  860032 kubeadm.go:587] duration metric: took 16.580433192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:19.623129  860032 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:32:19.626994  860032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:32:19.627046  860032 node_conditions.go:123] node cpu capacity is 8
	I1217 08:32:19.627099  860032 node_conditions.go:105] duration metric: took 3.935608ms to run NodePressure ...
	I1217 08:32:19.627120  860032 start.go:242] waiting for startup goroutines ...
	I1217 08:32:19.627130  860032 start.go:247] waiting for cluster config update ...
	I1217 08:32:19.627144  860032 start.go:256] writing updated cluster config ...
	I1217 08:32:19.627664  860032 ssh_runner.go:195] Run: rm -f paused
	I1217 08:32:19.633357  860032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:19.639945  860032 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.647463  860032 pod_ready.go:94] pod "coredns-5dd5756b68-mr99d" is "Ready"
	I1217 08:32:19.647499  860032 pod_ready.go:86] duration metric: took 7.52479ms for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.652349  860032 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.659423  860032 pod_ready.go:94] pod "etcd-old-k8s-version-640910" is "Ready"
	I1217 08:32:19.659461  860032 pod_ready.go:86] duration metric: took 7.072786ms for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.663294  860032 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.669941  860032 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-640910" is "Ready"
	I1217 08:32:19.669976  860032 pod_ready.go:86] duration metric: took 6.648805ms for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.673979  860032 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.042200  860032 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-640910" is "Ready"
	I1217 08:32:20.042232  860032 pod_ready.go:86] duration metric: took 368.226903ms for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.239616  860032 pod_ready.go:83] waiting for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.638112  860032 pod_ready.go:94] pod "kube-proxy-cwfwr" is "Ready"
	I1217 08:32:20.638140  860032 pod_ready.go:86] duration metric: took 398.494834ms for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.840026  860032 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.239099  860032 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-640910" is "Ready"
	I1217 08:32:21.239132  860032 pod_ready.go:86] duration metric: took 399.059167ms for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.239147  860032 pod_ready.go:40] duration metric: took 1.605741174s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:21.285586  860032 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 08:32:21.338149  860032 out.go:203] 
	W1217 08:32:21.340341  860032 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 08:32:21.342018  860032 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 08:32:21.345623  860032 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-640910" cluster and "default" namespace by default
	W1217 08:32:21.349107  860032 root.go:91] failed to log command end to audit: failed to find a log row with id equals to dd79e1a3-c046-43f1-a071-2f0a5a4d6a1b
	I1217 08:32:18.235063  866074 addons.go:530] duration metric: took 602.463723ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:18.509742  866074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-936988" context rescaled to 1 replicas
	W1217 08:32:20.008693  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:22.509397  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:20.297227  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:20.796895  866708 node_ready.go:49] node "embed-certs-581631" is "Ready"
	I1217 08:32:20.796932  866708 node_ready.go:38] duration metric: took 16.503273535s for node "embed-certs-581631" to be "Ready" ...
	I1217 08:32:20.796952  866708 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:20.797007  866708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:20.811909  866708 api_server.go:72] duration metric: took 16.874314934s to wait for apiserver process to appear ...
	I1217 08:32:20.811944  866708 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:20.811970  866708 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:32:20.817838  866708 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:32:20.819086  866708 api_server.go:141] control plane version: v1.34.3
	I1217 08:32:20.819118  866708 api_server.go:131] duration metric: took 7.165561ms to wait for apiserver health ...
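The healthz gate above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with body "ok". A minimal Go sketch of such a poll, assuming the endpoint from this log and skipping TLS verification for brevity (minikube's real client configures the cluster CA instead):

// Poll the apiserver /healthz endpoint until it answers 200/"ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// URL taken from this log; adjust per profile.
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}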
	I1217 08:32:20.819129  866708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:20.823436  866708 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:20.823477  866708 system_pods.go:61] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:20.823491  866708 system_pods.go:61] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:20.823500  866708 system_pods.go:61] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:20.823506  866708 system_pods.go:61] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:20.823512  866708 system_pods.go:61] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:20.823518  866708 system_pods.go:61] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:20.823523  866708 system_pods.go:61] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:20.823540  866708 system_pods.go:61] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:20.823549  866708 system_pods.go:74] duration metric: took 4.412326ms to wait for pod list to return data ...
	I1217 08:32:20.823559  866708 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:20.827902  866708 default_sa.go:45] found service account: "default"
	I1217 08:32:20.827931  866708 default_sa.go:55] duration metric: took 4.364348ms for default service account to be created ...
	I1217 08:32:20.827945  866708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:20.924443  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:20.924498  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:20.924512  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:20.924573  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:20.924580  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:20.924586  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:20.924592  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:20.924603  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:20.924611  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:20.924654  866708 retry.go:31] will retry after 243.506417ms: missing components: kube-dns
	I1217 08:32:21.172665  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.172712  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:21.172718  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.172723  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.172728  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.172732  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.172735  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.172738  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.172743  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:21.172760  866708 retry.go:31] will retry after 326.410198ms: missing components: kube-dns
	I1217 08:32:21.506028  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.506083  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:21.506094  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.506101  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.506107  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.506115  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.506121  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.506126  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.506147  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:21.506169  866708 retry.go:31] will retry after 400.365348ms: missing components: kube-dns
	I1217 08:32:21.911225  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.911348  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Running
	I1217 08:32:21.911362  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.911368  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.911373  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.911381  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.911386  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.911392  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.911396  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Running
	I1217 08:32:21.911415  866708 system_pods.go:126] duration metric: took 1.083462108s to wait for k8s-apps to be running ...
	I1217 08:32:21.911427  866708 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:32:21.911486  866708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:32:21.926549  866708 system_svc.go:56] duration metric: took 15.103695ms WaitForService to wait for kubelet
	I1217 08:32:21.926585  866708 kubeadm.go:587] duration metric: took 17.988996239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:21.926608  866708 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:32:21.929905  866708 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:32:21.929939  866708 node_conditions.go:123] node cpu capacity is 8
	I1217 08:32:21.929959  866708 node_conditions.go:105] duration metric: took 3.345146ms to run NodePressure ...
	I1217 08:32:21.929987  866708 start.go:242] waiting for startup goroutines ...
	I1217 08:32:21.929998  866708 start.go:247] waiting for cluster config update ...
	I1217 08:32:21.930013  866708 start.go:256] writing updated cluster config ...
	I1217 08:32:21.930341  866708 ssh_runner.go:195] Run: rm -f paused
	I1217 08:32:21.935015  866708 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:21.939503  866708 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.944458  866708 pod_ready.go:94] pod "coredns-66bc5c9577-p7sqj" is "Ready"
	I1217 08:32:21.944489  866708 pod_ready.go:86] duration metric: took 4.957519ms for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.946799  866708 pod_ready.go:83] waiting for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.950990  866708 pod_ready.go:94] pod "etcd-embed-certs-581631" is "Ready"
	I1217 08:32:21.951012  866708 pod_ready.go:86] duration metric: took 4.188719ms for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.952992  866708 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.956931  866708 pod_ready.go:94] pod "kube-apiserver-embed-certs-581631" is "Ready"
	I1217 08:32:21.956954  866708 pod_ready.go:86] duration metric: took 3.940004ms for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.958889  866708 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.340125  866708 pod_ready.go:94] pod "kube-controller-manager-embed-certs-581631" is "Ready"
	I1217 08:32:22.340165  866708 pod_ready.go:86] duration metric: took 381.252466ms for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.539721  866708 pod_ready.go:83] waiting for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.939424  866708 pod_ready.go:94] pod "kube-proxy-7z26t" is "Ready"
	I1217 08:32:22.939452  866708 pod_ready.go:86] duration metric: took 399.692811ms for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.140192  866708 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.540426  866708 pod_ready.go:94] pod "kube-scheduler-embed-certs-581631" is "Ready"
	I1217 08:32:23.540465  866708 pod_ready.go:86] duration metric: took 400.236944ms for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.540484  866708 pod_ready.go:40] duration metric: took 1.60543256s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:23.588350  866708 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:32:23.590603  866708 out.go:179] * Done! kubectl is now configured to use "embed-certs-581631" cluster and "default" namespace by default
	I1217 08:32:26.556175  876818 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 08:32:26.556252  876818 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:32:26.556377  876818 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:32:26.556450  876818 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:32:26.556515  876818 kubeadm.go:319] OS: Linux
	I1217 08:32:26.556622  876818 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:32:26.556686  876818 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:32:26.556759  876818 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:32:26.556827  876818 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:32:26.556897  876818 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:32:26.556963  876818 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:32:26.557031  876818 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:32:26.557094  876818 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:32:26.557191  876818 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:32:26.557272  876818 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:32:26.557426  876818 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:32:26.557524  876818 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:32:26.559101  876818 out.go:252]   - Generating certificates and keys ...
	I1217 08:32:26.559206  876818 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:32:26.559306  876818 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:32:26.559404  876818 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:32:26.559494  876818 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:32:26.559588  876818 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:32:26.559669  876818 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:32:26.559768  876818 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:32:26.559896  876818 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-225657 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 08:32:26.559944  876818 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:32:26.560063  876818 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-225657 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 08:32:26.560119  876818 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:32:26.560192  876818 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:32:26.560248  876818 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:32:26.560305  876818 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:32:26.560354  876818 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:32:26.560409  876818 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:32:26.560458  876818 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:32:26.560520  876818 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:32:26.560589  876818 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:32:26.560688  876818 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:32:26.560784  876818 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:32:26.563576  876818 out.go:252]   - Booting up control plane ...
	I1217 08:32:26.563704  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:32:26.563826  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:32:26.563906  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:32:26.564009  876818 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:32:26.564152  876818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:32:26.564278  876818 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:32:26.564399  876818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:32:26.564441  876818 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:32:26.564577  876818 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:32:26.564713  876818 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:32:26.564805  876818 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001759685s
	I1217 08:32:26.564919  876818 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:32:26.565037  876818 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1217 08:32:26.565141  876818 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:32:26.565222  876818 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:32:26.565289  876818 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.014387076s
	I1217 08:32:26.565342  876818 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.356564571s
	I1217 08:32:26.565447  876818 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00264337s
	I1217 08:32:26.565610  876818 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:32:26.565736  876818 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:32:26.565800  876818 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:32:26.565982  876818 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-225657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:32:26.566041  876818 kubeadm.go:319] [bootstrap-token] Using token: 5amo5u.ea0ubedundw2l43g
	I1217 08:32:26.567706  876818 out.go:252]   - Configuring RBAC rules ...
	I1217 08:32:26.567799  876818 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:32:26.567870  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:32:26.567982  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:32:26.568087  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:32:26.568181  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:32:26.568278  876818 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:32:26.568368  876818 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:32:26.568412  876818 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 08:32:26.568453  876818 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:32:26.568458  876818 kubeadm.go:319] 
	I1217 08:32:26.568525  876818 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:32:26.568544  876818 kubeadm.go:319] 
	I1217 08:32:26.568624  876818 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:32:26.568631  876818 kubeadm.go:319] 
	I1217 08:32:26.568650  876818 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:32:26.568715  876818 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:32:26.568761  876818 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:32:26.568765  876818 kubeadm.go:319] 
	I1217 08:32:26.568806  876818 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:32:26.568811  876818 kubeadm.go:319] 
	I1217 08:32:26.568848  876818 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:32:26.568853  876818 kubeadm.go:319] 
	I1217 08:32:26.568944  876818 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:32:26.569079  876818 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:32:26.569187  876818 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:32:26.569197  876818 kubeadm.go:319] 
	I1217 08:32:26.569320  876818 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:32:26.569417  876818 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:32:26.569424  876818 kubeadm.go:319] 
	I1217 08:32:26.569494  876818 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 5amo5u.ea0ubedundw2l43g \
	I1217 08:32:26.569666  876818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:32:26.569717  876818 kubeadm.go:319] 	--control-plane 
	I1217 08:32:26.569727  876818 kubeadm.go:319] 
	I1217 08:32:26.569859  876818 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:32:26.569871  876818 kubeadm.go:319] 
	I1217 08:32:26.569979  876818 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 5amo5u.ea0ubedundw2l43g \
	I1217 08:32:26.570124  876818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
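For context, the --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, as kubeadm documents for join. A small Go sketch of how such a pin can be computed (the ca.crt path is the kubeadm default and an assumption here):

// Compute a kubeadm-style CA public key pin: sha256 over the DER-encoded SPKI.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // kubeadm's default CA path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo) // hash over the DER-encoded SubjectPublicKeyInfo
	fmt.Printf("sha256:%x\n", sum)
}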
	I1217 08:32:26.570137  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:26.570148  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:26.572044  876818 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 08:32:25.008774  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:27.508833  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	I1217 08:32:26.573679  876818 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:26.578351  876818 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 08:32:26.578369  876818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:26.593905  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:32:26.826336  876818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:26.826621  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:26.826778  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-225657 minikube.k8s.io/updated_at=2025_12_17T08_32_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=default-k8s-diff-port-225657 minikube.k8s.io/primary=true
	I1217 08:32:26.840005  876818 ops.go:34] apiserver oom_adj: -16
	I1217 08:32:26.926850  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:27.427442  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:27.927518  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:28.427663  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:28.927809  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:29.427030  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:29.927781  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:30.427782  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:30.927463  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:31.427777  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:31.512412  876818 kubeadm.go:1114] duration metric: took 4.685935242s to wait for elevateKubeSystemPrivileges
	I1217 08:32:31.512448  876818 kubeadm.go:403] duration metric: took 16.003374717s to StartCluster
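The elevateKubeSystemPrivileges step timed above creates the minikube-rbac ClusterRoleBinding and then repeats "kubectl get sa default" until the controller manager has created the default ServiceAccount (the attempts in the log land roughly every 500ms). A simplified local analogue of that polling loop; it assumes kubectl on PATH and reuses the kubeconfig path from this log, whereas the real commands run over SSH inside the node:

// Retry "kubectl get sa default" until the default ServiceAccount exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default", "-n", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // spacing roughly matches the attempts in the log
	}
	fmt.Println("timed out waiting for the default service account")
}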
	I1217 08:32:31.512469  876818 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:31.512588  876818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:31.515589  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:31.515897  876818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:31.515919  876818 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:31.515990  876818 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:31.516136  876818 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225657"
	I1217 08:32:31.516147  876818 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:31.516165  876818 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225657"
	I1217 08:32:31.516199  876818 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225657"
	I1217 08:32:31.516206  876818 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:32:31.516231  876818 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225657"
	I1217 08:32:31.516689  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:31.516803  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:31.518011  876818 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:31.521771  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:31.550749  876818 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:31.552521  876818 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:31.552580  876818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:31.552645  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:31.562897  876818 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225657"
	I1217 08:32:31.563005  876818 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:32:31.563559  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:31.601968  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:31.620523  876818 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:31.620567  876818 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:31.620636  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:31.653313  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:31.664324  876818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:31.738146  876818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:31.779787  876818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:31.828233  876818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:31.985910  876818 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:31.988599  876818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:32:32.201458  876818 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 08:32:30.008823  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	I1217 08:32:31.508360  866074 node_ready.go:49] node "no-preload-936988" is "Ready"
	I1217 08:32:31.508395  866074 node_ready.go:38] duration metric: took 13.503607631s for node "no-preload-936988" to be "Ready" ...
	I1217 08:32:31.508411  866074 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:31.508466  866074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:31.526358  866074 api_server.go:72] duration metric: took 13.89313601s to wait for apiserver process to appear ...
	I1217 08:32:31.526391  866074 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:31.526418  866074 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 08:32:31.533062  866074 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 08:32:31.535433  866074 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:32:31.535473  866074 api_server.go:131] duration metric: took 9.073454ms to wait for apiserver health ...
	I1217 08:32:31.535486  866074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:31.547735  866074 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:31.547786  866074 system_pods.go:61] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:31.547795  866074 system_pods.go:61] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:31.547804  866074 system_pods.go:61] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:31.547810  866074 system_pods.go:61] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:31.547819  866074 system_pods.go:61] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:31.547824  866074 system_pods.go:61] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:31.547830  866074 system_pods.go:61] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:31.547841  866074 system_pods.go:61] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:31.547922  866074 system_pods.go:74] duration metric: took 12.356025ms to wait for pod list to return data ...
	I1217 08:32:31.547959  866074 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:31.554847  866074 default_sa.go:45] found service account: "default"
	I1217 08:32:31.554884  866074 default_sa.go:55] duration metric: took 6.915973ms for default service account to be created ...
	I1217 08:32:31.554895  866074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:31.561162  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:31.561207  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:31.561216  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:31.561225  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:31.561233  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:31.561238  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:31.561243  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:31.561247  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:31.561254  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:31.561296  866074 retry.go:31] will retry after 301.322132ms: missing components: kube-dns
	I1217 08:32:31.871587  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:31.871634  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:31.871641  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:31.871650  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:31.871656  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:31.871663  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:31.871670  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:31.871675  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:31.871683  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:31.871702  866074 retry.go:31] will retry after 269.277981ms: missing components: kube-dns
	I1217 08:32:32.145106  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:32.145140  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:32.145147  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:32.145154  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:32.145157  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:32.145161  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:32.145165  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:32.145168  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:32.145175  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:32.145196  866074 retry.go:31] will retry after 310.631471ms: missing components: kube-dns
	I1217 08:32:32.462205  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:32.462251  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:32.462261  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:32.462270  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:32.462275  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:32.462281  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:32.462285  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:32.462291  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:32.462299  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:32.462326  866074 retry.go:31] will retry after 522.584802ms: missing components: kube-dns
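The repeated "will retry after ...: missing components: kube-dns" lines above come from a poll over kube-system pods that keeps backing off until every pod reports Ready. A rough, simplified sketch of such a loop with client-go (kubeconfig path assumed from this log; minikube's own check additionally matches specific component labels and container readiness):

// List kube-system pods and retry until all of them report the PodReady condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed from this log
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		var notReady []string
		for _, p := range pods.Items {
			if !podIsReady(p) {
				notReady = append(notReady, p.Name)
			}
		}
		if len(notReady) == 0 {
			fmt.Println("all kube-system pods are Ready")
			return
		}
		fmt.Printf("will retry, still waiting for: %v\n", notReady)
		time.Sleep(300 * time.Millisecond)
	}
}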
	
	
	==> CRI-O <==
	Dec 17 08:32:20 embed-certs-581631 crio[771]: time="2025-12-17T08:32:20.819775992Z" level=info msg="Starting container: 361d665814fde105a8517bd5c5c327058a18b36e6c3b1685020ca41eff3b0ced" id=b1cf4a63-5c7a-4543-a209-8b97fb837a20 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:20 embed-certs-581631 crio[771]: time="2025-12-17T08:32:20.822103624Z" level=info msg="Started container" PID=1882 containerID=361d665814fde105a8517bd5c5c327058a18b36e6c3b1685020ca41eff3b0ced description=kube-system/coredns-66bc5c9577-p7sqj/coredns id=b1cf4a63-5c7a-4543-a209-8b97fb837a20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74424d02a47e2e982949b310563966f81d265d4523e5c9dc457abca408e06199
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.058473248Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4a9b42bc-d9b7-4c3a-b1a8-2e732fc5bb8f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.058571085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.06600887Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:526774300ceb1487daa1d958af326c5286d80eea1f6eabd2f65e9e8b37fc8973 UID:8cd6be07-b866-4ffa-92b9-52467bb7e162 NetNS:/var/run/netns/c5760c21-729f-4e8d-beaa-d71e5e401cfb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000504270}] Aliases:map[]}"
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.066049217Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.078895443Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:526774300ceb1487daa1d958af326c5286d80eea1f6eabd2f65e9e8b37fc8973 UID:8cd6be07-b866-4ffa-92b9-52467bb7e162 NetNS:/var/run/netns/c5760c21-729f-4e8d-beaa-d71e5e401cfb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000504270}] Aliases:map[]}"
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.079079309Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.083112545Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.084271834Z" level=info msg="Ran pod sandbox 526774300ceb1487daa1d958af326c5286d80eea1f6eabd2f65e9e8b37fc8973 with infra container: default/busybox/POD" id=4a9b42bc-d9b7-4c3a-b1a8-2e732fc5bb8f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.085749321Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=617a622a-cc83-4f61-b345-d8cae5ea981b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.085884484Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=617a622a-cc83-4f61-b345-d8cae5ea981b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.085935937Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=617a622a-cc83-4f61-b345-d8cae5ea981b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.086599825Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29d32eef-0c20-42d1-98da-dcd69fe61894 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:24 embed-certs-581631 crio[771]: time="2025-12-17T08:32:24.088476258Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.358315637Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=29d32eef-0c20-42d1-98da-dcd69fe61894 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.358992485Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7ed50c7b-c810-4706-baef-d4c46c3ff5a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.360358788Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=12e9ac65-9fa6-4501-befc-700a9bd64b9a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.364143574Z" level=info msg="Creating container: default/busybox/busybox" id=d9515c05-191b-4190-8260-493657a39e69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.364270708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.368281841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.368727671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.397938734Z" level=info msg="Created container 989ad9f23df107402e64277b7c0df19fb21286e2444cdaf0915445f07d6692fd: default/busybox/busybox" id=d9515c05-191b-4190-8260-493657a39e69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.398616345Z" level=info msg="Starting container: 989ad9f23df107402e64277b7c0df19fb21286e2444cdaf0915445f07d6692fd" id=c7e5046c-f96b-4413-bf27-95d29ebc1c20 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:26 embed-certs-581631 crio[771]: time="2025-12-17T08:32:26.400796724Z" level=info msg="Started container" PID=1961 containerID=989ad9f23df107402e64277b7c0df19fb21286e2444cdaf0915445f07d6692fd description=default/busybox/busybox id=c7e5046c-f96b-4413-bf27-95d29ebc1c20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=526774300ceb1487daa1d958af326c5286d80eea1f6eabd2f65e9e8b37fc8973
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	989ad9f23df10       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   526774300ceb1       busybox                                      default
	361d665814fde       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   74424d02a47e2       coredns-66bc5c9577-p7sqj                     kube-system
	d2146364427f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   c0380d7fe4dfc       storage-provisioner                          kube-system
	6d9a0ff7779ba       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   3fcbdfda86422       kindnet-wv7n7                                kube-system
	bb04b4c4c2f65       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      28 seconds ago      Running             kube-proxy                0                   7e3c6ced40172       kube-proxy-7z26t                             kube-system
	880074b61761b       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      38 seconds ago      Running             kube-apiserver            0                   041e9e0416b4a       kube-apiserver-embed-certs-581631            kube-system
	fbdb46ba43bad       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      38 seconds ago      Running             etcd                      0                   3e12f1e606401       etcd-embed-certs-581631                      kube-system
	78daa1049814b       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      38 seconds ago      Running             kube-controller-manager   0                   f1ba11463edc5       kube-controller-manager-embed-certs-581631   kube-system
	ea567c67f11de       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      38 seconds ago      Running             kube-scheduler            0                   aa6155a6ff484       kube-scheduler-embed-certs-581631            kube-system
	
	
	==> coredns [361d665814fde105a8517bd5c5c327058a18b36e6c3b1685020ca41eff3b0ced] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49490 - 32367 "HINFO IN 7850444992835679692.2184466144001191728. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044251084s
	
	
	==> describe nodes <==
	Name:               embed-certs-581631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-581631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=embed-certs-581631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:31:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-581631
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:32:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:32:30 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:32:30 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:32:30 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:32:30 +0000   Wed, 17 Dec 2025 08:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-581631
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                54d7a8c2-691a-45c0-b4a2-f9840ad8416b
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-p7sqj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-embed-certs-581631                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-wv7n7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-embed-certs-581631             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-581631    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-7z26t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-embed-certs-581631             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node embed-certs-581631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node embed-certs-581631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node embed-certs-581631 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node embed-certs-581631 event: Registered Node embed-certs-581631 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-581631 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [fbdb46ba43badb6b5f94633dbbbf9dee3faf8cdac0963fbcf938d0678809835c] <==
	{"level":"warn","ts":"2025-12-17T08:31:55.977515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:55.988656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:55.997013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.010750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.020916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.029303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.037762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.044921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.051863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.063715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.071053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.079639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.088112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.096326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.103486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.112893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.121287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.129112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.137315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.145623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.153044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.167840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.177017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.185112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:31:56.258654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50720","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:32:33 up  2:14,  0 user,  load average: 5.55, 3.89, 2.70
	Linux embed-certs-581631 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6d9a0ff7779ba66b1692e84310e648feabe3afb451e0d153270096cbdd97d9ed] <==
	I1217 08:32:09.661362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:32:09.661965       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 08:32:09.662159       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:32:09.662181       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:32:09.662206       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:32:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:32:09.960835       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:32:09.960873       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:32:09.960886       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:32:09.961072       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:32:10.287508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:32:10.387394       1 metrics.go:72] Registering metrics
	I1217 08:32:10.387645       1 controller.go:711] "Syncing nftables rules"
	I1217 08:32:19.963627       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:32:19.963673       1 main.go:301] handling current node
	I1217 08:32:29.962668       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:32:29.962717       1 main.go:301] handling current node
	
	
	==> kube-apiserver [880074b61761b1af74574aba7d317d8bec487615ac11c45bee2079473d1a4127] <==
	I1217 08:31:56.805619       1 policy_source.go:240] refreshing policies
	E1217 08:31:56.850028       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1217 08:31:56.897769       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:31:56.908020       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 08:31:56.908351       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:31:56.923218       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:31:56.923364       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:31:56.992704       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:31:57.700527       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 08:31:57.705002       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 08:31:57.705023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:31:58.392896       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:31:58.440913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:31:58.510130       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 08:31:58.518625       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1217 08:31:58.520182       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:31:58.526726       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:31:58.719566       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:31:59.592342       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:31:59.605416       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 08:31:59.616094       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:32:04.623903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:32:04.724598       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 08:32:04.786361       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:04.793702       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [78daa1049814b87f292be7a002fe8070a7ed6baa3475480e46373577b7c1da82] <==
	I1217 08:32:03.718567       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:32:03.718587       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 08:32:03.718909       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 08:32:03.719442       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 08:32:03.719585       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 08:32:03.719507       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:32:03.719724       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-581631"
	I1217 08:32:03.720233       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 08:32:03.720670       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 08:32:03.720680       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 08:32:03.721381       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:32:03.721719       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 08:32:03.722765       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:32:03.723869       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 08:32:03.724987       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 08:32:03.725135       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:32:03.726212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:32:03.726346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:32:03.728565       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:32:03.729877       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 08:32:03.737452       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 08:32:03.744910       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:32:03.752125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 08:32:03.768191       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:32:23.722963       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bb04b4c4c2f6556bf793013514ad4da8ccf1100d265dde1dec0129a20912245c] <==
	I1217 08:32:05.216741       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:32:05.302513       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:32:05.403753       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:32:05.403804       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:32:05.403921       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:32:05.429430       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:32:05.429499       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:32:05.435907       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:32:05.436745       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:32:05.436768       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:32:05.438468       1 config.go:200] "Starting service config controller"
	I1217 08:32:05.438480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:32:05.438499       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:32:05.438505       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:32:05.438519       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:32:05.438524       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:32:05.438561       1 config.go:309] "Starting node config controller"
	I1217 08:32:05.438566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:32:05.538999       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:32:05.538997       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:32:05.539044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:32:05.539054       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ea567c67f11de6ef74b5715627059a246b2299ffd38831c614382854ab1569b2] <==
	E1217 08:31:56.746230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 08:31:56.746340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 08:31:56.746799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:31:56.746453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 08:31:56.746567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 08:31:56.746576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:31:56.746738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:31:56.746801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 08:31:56.746554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 08:31:56.746986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:31:57.606367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 08:31:57.646085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 08:31:57.650517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:31:57.663106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:31:57.704616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 08:31:57.740316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:31:57.870336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 08:31:57.886742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 08:31:57.899890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 08:31:57.943581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 08:31:57.956199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:31:57.991710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:31:58.035925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 08:31:58.084887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1217 08:32:00.843848       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:32:00 embed-certs-581631 kubelet[1332]: I1217 08:32:00.572176    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-581631" podStartSLOduration=2.5721449339999998 podStartE2EDuration="2.572144934s" podCreationTimestamp="2025-12-17 08:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:00.552978546 +0000 UTC m=+1.214580238" watchObservedRunningTime="2025-12-17 08:32:00.572144934 +0000 UTC m=+1.233746616"
	Dec 17 08:32:00 embed-certs-581631 kubelet[1332]: I1217 08:32:00.591851    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-581631" podStartSLOduration=1.5918223889999998 podStartE2EDuration="1.591822389s" podCreationTimestamp="2025-12-17 08:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:00.573161624 +0000 UTC m=+1.234763304" watchObservedRunningTime="2025-12-17 08:32:00.591822389 +0000 UTC m=+1.253424061"
	Dec 17 08:32:00 embed-certs-581631 kubelet[1332]: I1217 08:32:00.604690    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-581631" podStartSLOduration=1.604669683 podStartE2EDuration="1.604669683s" podCreationTimestamp="2025-12-17 08:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:00.592476453 +0000 UTC m=+1.254078133" watchObservedRunningTime="2025-12-17 08:32:00.604669683 +0000 UTC m=+1.266271364"
	Dec 17 08:32:00 embed-certs-581631 kubelet[1332]: I1217 08:32:00.605012    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-581631" podStartSLOduration=1.604994825 podStartE2EDuration="1.604994825s" podCreationTimestamp="2025-12-17 08:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:00.604454913 +0000 UTC m=+1.266056583" watchObservedRunningTime="2025-12-17 08:32:00.604994825 +0000 UTC m=+1.266596506"
	Dec 17 08:32:03 embed-certs-581631 kubelet[1332]: I1217 08:32:03.708162    1332 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 08:32:03 embed-certs-581631 kubelet[1332]: I1217 08:32:03.709030    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864088    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c471e5c-829a-440a-b333-19c2b3695fa8-lib-modules\") pod \"kindnet-wv7n7\" (UID: \"6c471e5c-829a-440a-b333-19c2b3695fa8\") " pod="kube-system/kindnet-wv7n7"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864174    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c471e5c-829a-440a-b333-19c2b3695fa8-cni-cfg\") pod \"kindnet-wv7n7\" (UID: \"6c471e5c-829a-440a-b333-19c2b3695fa8\") " pod="kube-system/kindnet-wv7n7"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864221    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvw7l\" (UniqueName: \"kubernetes.io/projected/6c471e5c-829a-440a-b333-19c2b3695fa8-kube-api-access-fvw7l\") pod \"kindnet-wv7n7\" (UID: \"6c471e5c-829a-440a-b333-19c2b3695fa8\") " pod="kube-system/kindnet-wv7n7"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864257    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95fe0d1a-5b68-4884-8189-458f93ba38e2-xtables-lock\") pod \"kube-proxy-7z26t\" (UID: \"95fe0d1a-5b68-4884-8189-458f93ba38e2\") " pod="kube-system/kube-proxy-7z26t"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864288    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx7nh\" (UniqueName: \"kubernetes.io/projected/95fe0d1a-5b68-4884-8189-458f93ba38e2-kube-api-access-dx7nh\") pod \"kube-proxy-7z26t\" (UID: \"95fe0d1a-5b68-4884-8189-458f93ba38e2\") " pod="kube-system/kube-proxy-7z26t"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864318    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c471e5c-829a-440a-b333-19c2b3695fa8-xtables-lock\") pod \"kindnet-wv7n7\" (UID: \"6c471e5c-829a-440a-b333-19c2b3695fa8\") " pod="kube-system/kindnet-wv7n7"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864344    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95fe0d1a-5b68-4884-8189-458f93ba38e2-kube-proxy\") pod \"kube-proxy-7z26t\" (UID: \"95fe0d1a-5b68-4884-8189-458f93ba38e2\") " pod="kube-system/kube-proxy-7z26t"
	Dec 17 08:32:04 embed-certs-581631 kubelet[1332]: I1217 08:32:04.864374    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95fe0d1a-5b68-4884-8189-458f93ba38e2-lib-modules\") pod \"kube-proxy-7z26t\" (UID: \"95fe0d1a-5b68-4884-8189-458f93ba38e2\") " pod="kube-system/kube-proxy-7z26t"
	Dec 17 08:32:05 embed-certs-581631 kubelet[1332]: I1217 08:32:05.496187    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7z26t" podStartSLOduration=1.4961638050000001 podStartE2EDuration="1.496163805s" podCreationTimestamp="2025-12-17 08:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:05.495783804 +0000 UTC m=+6.157385484" watchObservedRunningTime="2025-12-17 08:32:05.496163805 +0000 UTC m=+6.157765484"
	Dec 17 08:32:09 embed-certs-581631 kubelet[1332]: I1217 08:32:09.513688    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wv7n7" podStartSLOduration=1.242704762 podStartE2EDuration="5.51366039s" podCreationTimestamp="2025-12-17 08:32:04 +0000 UTC" firstStartedPulling="2025-12-17 08:32:05.071134297 +0000 UTC m=+5.732735957" lastFinishedPulling="2025-12-17 08:32:09.342089909 +0000 UTC m=+10.003691585" observedRunningTime="2025-12-17 08:32:09.51355687 +0000 UTC m=+10.175158550" watchObservedRunningTime="2025-12-17 08:32:09.51366039 +0000 UTC m=+10.175262070"
	Dec 17 08:32:20 embed-certs-581631 kubelet[1332]: I1217 08:32:20.421202    1332 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 08:32:20 embed-certs-581631 kubelet[1332]: I1217 08:32:20.578893    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62629b09-13d5-43fe-bce3-59d9c9a73f8e-config-volume\") pod \"coredns-66bc5c9577-p7sqj\" (UID: \"62629b09-13d5-43fe-bce3-59d9c9a73f8e\") " pod="kube-system/coredns-66bc5c9577-p7sqj"
	Dec 17 08:32:20 embed-certs-581631 kubelet[1332]: I1217 08:32:20.579652    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqmh\" (UniqueName: \"kubernetes.io/projected/62629b09-13d5-43fe-bce3-59d9c9a73f8e-kube-api-access-pwqmh\") pod \"coredns-66bc5c9577-p7sqj\" (UID: \"62629b09-13d5-43fe-bce3-59d9c9a73f8e\") " pod="kube-system/coredns-66bc5c9577-p7sqj"
	Dec 17 08:32:20 embed-certs-581631 kubelet[1332]: I1217 08:32:20.579723    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q575r\" (UniqueName: \"kubernetes.io/projected/c0c0f2e6-244f-4a01-9660-dcbec60b1c1f-kube-api-access-q575r\") pod \"storage-provisioner\" (UID: \"c0c0f2e6-244f-4a01-9660-dcbec60b1c1f\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:20 embed-certs-581631 kubelet[1332]: I1217 08:32:20.579759    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0c0f2e6-244f-4a01-9660-dcbec60b1c1f-tmp\") pod \"storage-provisioner\" (UID: \"c0c0f2e6-244f-4a01-9660-dcbec60b1c1f\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:21 embed-certs-581631 kubelet[1332]: I1217 08:32:21.543157    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p7sqj" podStartSLOduration=17.543134531 podStartE2EDuration="17.543134531s" podCreationTimestamp="2025-12-17 08:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:21.542830364 +0000 UTC m=+22.204432057" watchObservedRunningTime="2025-12-17 08:32:21.543134531 +0000 UTC m=+22.204736211"
	Dec 17 08:32:21 embed-certs-581631 kubelet[1332]: I1217 08:32:21.556093    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.556071084 podStartE2EDuration="17.556071084s" podCreationTimestamp="2025-12-17 08:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:21.555971906 +0000 UTC m=+22.217573586" watchObservedRunningTime="2025-12-17 08:32:21.556071084 +0000 UTC m=+22.217672761"
	Dec 17 08:32:23 embed-certs-581631 kubelet[1332]: I1217 08:32:23.801189    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khd9d\" (UniqueName: \"kubernetes.io/projected/8cd6be07-b866-4ffa-92b9-52467bb7e162-kube-api-access-khd9d\") pod \"busybox\" (UID: \"8cd6be07-b866-4ffa-92b9-52467bb7e162\") " pod="default/busybox"
	Dec 17 08:32:26 embed-certs-581631 kubelet[1332]: I1217 08:32:26.557131    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.283454624 podStartE2EDuration="3.557104538s" podCreationTimestamp="2025-12-17 08:32:23 +0000 UTC" firstStartedPulling="2025-12-17 08:32:24.086164127 +0000 UTC m=+24.747765794" lastFinishedPulling="2025-12-17 08:32:26.359814045 +0000 UTC m=+27.021415708" observedRunningTime="2025-12-17 08:32:26.556863398 +0000 UTC m=+27.218465075" watchObservedRunningTime="2025-12-17 08:32:26.557104538 +0000 UTC m=+27.218706218"
	
	
	==> storage-provisioner [d2146364427f734cff6cf0093152bcf8df5ae97519fb4b45e43e0782c466f4a9] <==
	I1217 08:32:20.826940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:32:20.837031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:32:20.837091       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:32:20.840560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:20.846526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:32:20.846765       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:32:20.846901       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79bb53cc-560e-4cfd-b5ff-3872574557fe", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-581631_1fe719c4-c578-4c1f-82c0-66872107cee2 became leader
	I1217 08:32:20.846971       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-581631_1fe719c4-c578-4c1f-82c0-66872107cee2!
	W1217 08:32:20.849308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:20.854971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:32:20.948100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-581631_1fe719c4-c578-4c1f-82c0-66872107cee2!
	W1217 08:32:22.858877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:22.864016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:24.867115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:24.871762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:26.876127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:26.881437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:28.885064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:28.890590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:30.894341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:30.899036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:32.902427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:32.908160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-581631 -n embed-certs-581631
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-581631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.72s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.578553ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:32:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
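The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks whether the cluster is paused by running "sudo runc list -f json" on the node, and that check fails here because /run/runc does not exist. As a rough sketch only (assuming the no-preload-936988 profile is still running; these exact invocations are illustrative and are not taken from the test harness), the same check can be reproduced by hand:

	# Re-run the runc query that the "list paused" step uses; on an affected
	# node this exits 1 with "open /run/runc: no such file or directory".
	out/minikube-linux-amd64 -p no-preload-936988 ssh -- sudo runc list -f json

	# Confirm whether the runc state directory is simply absent on the node.
	out/minikube-linux-amd64 -p no-preload-936988 ssh -- ls -ld /run/runc

A missing /run/runc on a crio-runtime node is consistent with the stderr block above and explains why the addon-enable command never reaches the apply step.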
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-936988 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-936988 describe deploy/metrics-server -n kube-system: exit status 1 (57.935579ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-936988 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
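The expectation at start_stop_delete_test.go:219 follows from the --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain flags passed at line 203: the metrics-server Deployment should reference fake.domain/registry.k8s.io/echoserver:1.4. Because the enable step failed, the Deployment was never created and the deployment info is empty. A hedged sketch of how the image could be inspected by hand on a cluster where the addon did enable (the jsonpath query is an illustration, not the harness's own check):

	kubectl --context no-preload-936988 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'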
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-936988
helpers_test.go:244: (dbg) docker inspect no-preload-936988:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2",
	        "Created": "2025-12-17T08:31:34.254013653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 867018,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:31:34.3163651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/hosts",
	        "LogPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2-json.log",
	        "Name": "/no-preload-936988",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-936988:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-936988",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2",
	                "LowerDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-936988",
	                "Source": "/var/lib/docker/volumes/no-preload-936988/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-936988",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-936988",
	                "name.minikube.sigs.k8s.io": "no-preload-936988",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1faa916ef884b6514345da51bedf10ced51b9a04bdff5cd8a9d4b971269b0385",
	            "SandboxKey": "/var/run/docker/netns/1faa916ef884",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-936988": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31552e72b7c34000bc246afc13fd33f7afa373a22fe9db1908bd57c2a71027fe",
	                    "EndpointID": "544a1082705444c13162194f8eb5553d5f177a85814b5cbaec521ff71ae5fce0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "2a:60:24:a5:91:71",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-936988",
	                        "80dedce31e64"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-936988 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-936988 logs -n 25: (1.115058814s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-055130 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo docker system info                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cri-dockerd --version                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo containerd config dump                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                        │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                         │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                          │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                             │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:32:01
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:32:01.552734  876818 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:32:01.553099  876818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:01.553114  876818 out.go:374] Setting ErrFile to fd 2...
	I1217 08:32:01.553121  876818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:01.553340  876818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:32:01.553902  876818 out.go:368] Setting JSON to false
	I1217 08:32:01.555210  876818 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8067,"bootTime":1765952255,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:32:01.555284  876818 start.go:143] virtualization: kvm guest
	I1217 08:32:01.558242  876818 out.go:179] * [default-k8s-diff-port-225657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:32:01.561313  876818 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:32:01.561325  876818 notify.go:221] Checking for updates...
	I1217 08:32:01.568510  876818 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:32:01.571884  876818 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:01.574245  876818 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:32:01.576734  876818 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:32:01.578873  876818 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:32:01.581914  876818 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:01.582052  876818 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:32:01.582137  876818 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:32:01.582248  876818 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:32:01.612172  876818 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:32:01.612311  876818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:01.684785  876818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 08:32:01.672949118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:01.684957  876818 docker.go:319] overlay module found
	I1217 08:32:01.687104  876818 out.go:179] * Using the docker driver based on user configuration
	I1217 08:32:01.688739  876818 start.go:309] selected driver: docker
	I1217 08:32:01.688762  876818 start.go:927] validating driver "docker" against <nil>
	I1217 08:32:01.688779  876818 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:32:01.689371  876818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:01.761694  876818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 08:32:01.749436813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:01.761852  876818 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 08:32:01.762082  876818 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:01.764368  876818 out.go:179] * Using Docker driver with root privileges
	I1217 08:32:01.766035  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:01.766129  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:01.766145  876818 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:32:01.766238  876818 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:01.768170  876818 out.go:179] * Starting "default-k8s-diff-port-225657" primary control-plane node in "default-k8s-diff-port-225657" cluster
	I1217 08:32:01.769863  876818 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:32:01.772343  876818 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:32:01.774131  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:01.774188  876818 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:32:01.774206  876818 cache.go:65] Caching tarball of preloaded images
	I1217 08:32:01.774253  876818 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:32:01.774340  876818 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:32:01.774359  876818 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:32:01.774581  876818 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:32:01.774623  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json: {Name:mkdc1e498a413d8c47a4c9161b8ddc9e11834a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:01.803235  876818 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:32:01.803269  876818 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:32:01.803295  876818 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:32:01.803341  876818 start.go:360] acquireMachinesLock for default-k8s-diff-port-225657: {Name:mkf524609fef75b896bc809c6c5673b68f778ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:32:01.803497  876818 start.go:364] duration metric: took 133.382µs to acquireMachinesLock for "default-k8s-diff-port-225657"
	I1217 08:32:01.803569  876818 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:01.803675  876818 start.go:125] createHost starting for "" (driver="docker")
	I1217 08:31:59.471510  866074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:31:59.487104  866074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1217 08:31:59.492193  866074 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1217 08:31:59.492241  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1217 08:32:01.990912  866074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1217 08:32:02.003508  866074 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1217 08:32:02.003588  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	I1217 08:32:02.288548  866074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:32:02.298803  866074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:32:02.315378  866074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:32:02.402911  866074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 08:32:02.421212  866074 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:32:02.426364  866074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:32:02.442236  866074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:02.553459  866074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:02.590063  866074 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988 for IP: 192.168.94.2
	I1217 08:32:02.590092  866074 certs.go:195] generating shared ca certs ...
	I1217 08:32:02.590113  866074 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.590330  866074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:32:02.590413  866074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:32:02.590429  866074 certs.go:257] generating profile certs ...
	I1217 08:32:02.590514  866074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key
	I1217 08:32:02.590544  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt with IP's: []
	I1217 08:32:02.636814  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt ...
	I1217 08:32:02.636860  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.crt: {Name:mkc8d6c44408b047376e6be421e3c93768af7dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.637104  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key ...
	I1217 08:32:02.637126  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/client.key: {Name:mk23aabb5dd35dc4380024377e6eece268d19273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.637255  866074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be
	I1217 08:32:02.637279  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 08:31:57.930133  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:58.430261  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:58.930566  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:59.429668  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:31:59.929814  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.430337  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.930517  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.430253  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.929494  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.430181  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.930157  860032 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.041132  860032 kubeadm.go:1114] duration metric: took 12.777197998s to wait for elevateKubeSystemPrivileges
	I1217 08:32:03.041172  860032 kubeadm.go:403] duration metric: took 25.06139908s to StartCluster
	I1217 08:32:03.041194  860032 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.041275  860032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:03.042238  860032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.042571  860032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:03.042571  860032 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:03.042772  860032 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:32:03.042598  860032 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:03.042829  860032 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-640910"
	I1217 08:32:03.042846  860032 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-640910"
	I1217 08:32:03.042873  860032 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:32:03.043189  860032 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-640910"
	I1217 08:32:03.043227  860032 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-640910"
	I1217 08:32:03.043387  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.043604  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.044941  860032 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:03.047619  860032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:03.077628  860032 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:03.079571  860032 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.079600  860032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:03.079664  860032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:03.079881  860032 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-640910"
	I1217 08:32:03.079930  860032 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:32:03.080421  860032 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:03.115572  860032 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.115595  860032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:03.115604  860032 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:03.115657  860032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:03.149311  860032 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:03.198402  860032 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:03.247949  860032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:03.263689  860032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.280464  860032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.580999  860032 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:03.582028  860032 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-640910" to be "Ready" ...
	I1217 08:32:03.834067  860032 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 08:32:00.400233  866708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:00.406292  866708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 08:32:00.406321  866708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:00.424039  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:32:00.743784  866708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:00.743917  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:00.743934  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-581631 minikube.k8s.io/updated_at=2025_12_17T08_32_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=embed-certs-581631 minikube.k8s.io/primary=true
	I1217 08:32:00.845521  866708 ops.go:34] apiserver oom_adj: -16
	I1217 08:32:00.845595  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.345810  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:01.845712  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.345788  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:02.846718  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.345718  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.845894  866708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:03.935808  866708 kubeadm.go:1114] duration metric: took 3.191972569s to wait for elevateKubeSystemPrivileges
	I1217 08:32:03.935854  866708 kubeadm.go:403] duration metric: took 16.523773394s to StartCluster
	I1217 08:32:03.935872  866708 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.935942  866708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:03.937291  866708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:03.937548  866708 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:03.937670  866708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:03.937680  866708 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:03.937783  866708 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-581631"
	I1217 08:32:03.937801  866708 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-581631"
	I1217 08:32:03.937806  866708 addons.go:70] Setting default-storageclass=true in profile "embed-certs-581631"
	I1217 08:32:03.937828  866708 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:03.937836  866708 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:32:03.937842  866708 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-581631"
	I1217 08:32:03.938130  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.938357  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.941811  866708 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:03.943970  866708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:03.964732  866708 addons.go:239] Setting addon default-storageclass=true in "embed-certs-581631"
	I1217 08:32:03.964785  866708 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:32:03.965299  866708 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:03.969098  866708 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:03.970610  866708 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:03.970635  866708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:03.970704  866708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:03.995425  866708 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:03.995462  866708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:03.995547  866708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:04.006698  866708 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:04.031285  866708 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:04.065134  866708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:02.824596  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be ...
	I1217 08:32:02.824631  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be: {Name:mk45976aa0955a0afc1e8d64278dff519aaa2454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.824859  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be ...
	I1217 08:32:02.824886  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be: {Name:mk2dae5a961985112e8e9209c523ebf3ce607cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.825034  866074 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt.abd6f3be -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt
	I1217 08:32:02.825138  866074 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key.abd6f3be -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key
	I1217 08:32:02.825220  866074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key
	I1217 08:32:02.825243  866074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt with IP's: []
	I1217 08:32:02.924760  866074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt ...
	I1217 08:32:02.924794  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt: {Name:mk267cedf76a400096972e8a1d55b0ea70195e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.925012  866074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key ...
	I1217 08:32:02.925034  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key: {Name:mkcad6ea1b15d8213d3a172ca1538446ff01dcfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:02.925290  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:32:02.925355  866074 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:32:02.925366  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:32:02.925400  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:32:02.925435  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:32:02.925467  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:32:02.925552  866074 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:02.926601  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:32:02.955081  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:32:02.999049  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:32:03.023623  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:32:03.048610  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:32:03.080188  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:32:03.113501  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:32:03.144381  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/no-preload-936988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:32:03.177658  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:32:03.213764  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:32:03.241889  866074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:32:03.273169  866074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:32:03.300818  866074 ssh_runner.go:195] Run: openssl version
	I1217 08:32:03.311109  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.324197  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:32:03.346933  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.353468  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.353573  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:32:03.417221  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:03.429614  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:03.442003  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.455221  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:32:03.469042  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.477276  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.477361  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:03.534695  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:32:03.547563  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:32:03.557640  866074 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.571257  866074 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:32:03.585135  866074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.591025  866074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.591099  866074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:32:03.646989  866074 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:32:03.662879  866074 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
	I1217 08:32:03.679566  866074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:32:03.687395  866074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:32:03.687462  866074 kubeadm.go:401] StartCluster: {Name:no-preload-936988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-936988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:03.687578  866074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:32:03.687637  866074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:32:03.727391  866074 cri.go:89] found id: ""
	I1217 08:32:03.727501  866074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:32:03.738224  866074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:32:03.748723  866074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:32:03.748793  866074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:32:03.760841  866074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:32:03.760866  866074 kubeadm.go:158] found existing configuration files:
	
	I1217 08:32:03.760920  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 08:32:03.772427  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:32:03.772500  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:32:03.783020  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 08:32:03.794743  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:32:03.794817  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:32:03.805322  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 08:32:03.817490  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:32:03.817564  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:32:03.831785  866074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 08:32:03.843542  866074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:32:03.843616  866074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:32:03.853047  866074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:32:03.899091  866074 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 08:32:03.899195  866074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:32:04.004708  866074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:32:04.004803  866074 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:32:04.004848  866074 kubeadm.go:319] OS: Linux
	I1217 08:32:04.004909  866074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:32:04.004973  866074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:32:04.005038  866074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:32:04.006028  866074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:32:04.006112  866074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:32:04.006175  866074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:32:04.006240  866074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:32:04.006308  866074 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:32:04.119188  866074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:32:04.119332  866074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:32:04.119474  866074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:32:04.144669  866074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:32:04.132626  866708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:04.136786  866708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:04.152680  866708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:04.291998  866708 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:04.293620  866708 node_ready.go:35] waiting up to 6m0s for node "embed-certs-581631" to be "Ready" ...
	I1217 08:32:04.514479  866708 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 08:32:04.149161  866074 out.go:252]   - Generating certificates and keys ...
	I1217 08:32:04.149271  866074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:32:04.149357  866074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:32:04.345146  866074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:32:04.456420  866074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:32:04.569867  866074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:32:04.769981  866074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:32:04.962017  866074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:32:04.962211  866074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-936988] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 08:32:05.189992  866074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:32:05.190862  866074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-936988] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 08:32:05.314135  866074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:32:05.436298  866074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:32:05.639248  866074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:32:05.639451  866074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:32:05.799909  866074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:32:05.903137  866074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:32:06.294633  866074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:32:06.421349  866074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:32:06.498721  866074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:32:06.499367  866074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:32:06.544114  866074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:32:01.806337  876818 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:32:01.806789  876818 start.go:159] libmachine.API.Create for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:32:01.806841  876818 client.go:173] LocalClient.Create starting
	I1217 08:32:01.806928  876818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:32:01.806973  876818 main.go:143] libmachine: Decoding PEM data...
	I1217 08:32:01.807004  876818 main.go:143] libmachine: Parsing certificate...
	I1217 08:32:01.807100  876818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:32:01.807134  876818 main.go:143] libmachine: Decoding PEM data...
	I1217 08:32:01.807156  876818 main.go:143] libmachine: Parsing certificate...
	I1217 08:32:01.807598  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:32:01.828194  876818 cli_runner.go:211] docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:32:01.828308  876818 network_create.go:284] running [docker network inspect default-k8s-diff-port-225657] to gather additional debugging logs...
	I1217 08:32:01.828345  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657
	W1217 08:32:01.849757  876818 cli_runner.go:211] docker network inspect default-k8s-diff-port-225657 returned with exit code 1
	I1217 08:32:01.849798  876818 network_create.go:287] error running [docker network inspect default-k8s-diff-port-225657]: docker network inspect default-k8s-diff-port-225657: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-225657 not found
	I1217 08:32:01.849822  876818 network_create.go:289] output of [docker network inspect default-k8s-diff-port-225657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-225657 not found
	
	** /stderr **
	I1217 08:32:01.849945  876818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:32:01.874361  876818 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:32:01.875036  876818 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:32:01.875878  876818 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:32:01.876831  876818 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e1180462b720 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:c6:ea:2d:3c:aa} reservation:<nil>}
	I1217 08:32:01.877453  876818 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b355f632d1e4 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:2c:e1:34:c1:34} reservation:<nil>}
	I1217 08:32:01.878105  876818 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-31552e72b7c3 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c2:be:20:58:f7:57} reservation:<nil>}
	I1217 08:32:01.879300  876818 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020fbec0}
	I1217 08:32:01.879341  876818 network_create.go:124] attempt to create docker network default-k8s-diff-port-225657 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 08:32:01.879423  876818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 default-k8s-diff-port-225657
	I1217 08:32:01.960561  876818 network_create.go:108] docker network default-k8s-diff-port-225657 192.168.103.0/24 created
	I1217 08:32:01.960599  876818 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-225657" container
	I1217 08:32:01.960690  876818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:32:01.985847  876818 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-225657 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:32:02.021946  876818 oci.go:103] Successfully created a docker volume default-k8s-diff-port-225657
	I1217 08:32:02.022045  876818 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-225657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --entrypoint /usr/bin/test -v default-k8s-diff-port-225657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:32:02.718937  876818 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-225657
	I1217 08:32:02.719022  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:02.719035  876818 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:32:02.719125  876818 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 08:32:06.616828  866074 out.go:252]   - Booting up control plane ...
	I1217 08:32:06.617019  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:32:06.617189  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:32:06.617313  866074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:32:06.617525  866074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:32:06.617836  866074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:32:06.618011  866074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:32:06.618170  866074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:32:06.618229  866074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:32:06.755893  866074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:32:06.756060  866074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:32:07.756781  866074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001037016s
	I1217 08:32:07.760245  866074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:32:07.760395  866074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 08:32:07.760553  866074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:32:07.760691  866074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:32:03.835993  860032 addons.go:530] duration metric: took 793.383913ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:04.092564  860032 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-640910" context rescaled to 1 replicas
	W1217 08:32:05.590407  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:04.516501  866708 addons.go:530] duration metric: took 578.822881ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:04.798269  866708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-581631" context rescaled to 1 replicas
	W1217 08:32:06.297588  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:08.297713  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:07.917031  876818 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (5.197831615s)
	I1217 08:32:07.917065  876818 kic.go:203] duration metric: took 5.198025236s to extract preloaded images to volume ...
	W1217 08:32:07.917162  876818 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 08:32:07.917207  876818 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 08:32:07.917258  876818 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 08:32:07.988840  876818 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-225657 --name default-k8s-diff-port-225657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-225657 --network default-k8s-diff-port-225657 --ip 192.168.103.2 --volume default-k8s-diff-port-225657:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 08:32:08.400242  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Running}}
	I1217 08:32:08.424855  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.450896  876818 cli_runner.go:164] Run: docker exec default-k8s-diff-port-225657 stat /var/lib/dpkg/alternatives/iptables
	I1217 08:32:08.523021  876818 oci.go:144] the created container "default-k8s-diff-port-225657" has a running status.
	I1217 08:32:08.523088  876818 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519...
	I1217 08:32:08.525770  876818 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 08:32:08.560942  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.585171  876818 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 08:32:08.585195  876818 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-225657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 08:32:08.651792  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:08.677101  876818 machine.go:94] provisionDockerMachine start ...
	I1217 08:32:08.677481  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:08.707459  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:08.707676  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:08.707703  876818 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:32:08.708734  876818 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57932->127.0.0.1:33510: read: connection reset by peer
	I1217 08:32:08.765985  866074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005397586s
	I1217 08:32:09.803829  866074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.0435496s
	I1217 08:32:11.763049  866074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002842384s
	I1217 08:32:11.782622  866074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:32:11.796483  866074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:32:11.808857  866074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:32:11.809099  866074 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-936988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:32:11.818163  866074 kubeadm.go:319] [bootstrap-token] Using token: 7nqi1p.ejost2d3dqegwn4g
	I1217 08:32:11.819926  866074 out.go:252]   - Configuring RBAC rules ...
	I1217 08:32:11.820101  866074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:32:11.823946  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:32:11.832263  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:32:11.835817  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:32:11.838856  866074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:32:11.842484  866074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:32:12.169848  866074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:32:12.589615  866074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	W1217 08:32:08.088510  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:10.585565  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:12.585741  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:13.169675  866074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:32:13.171934  866074 kubeadm.go:319] 
	I1217 08:32:13.172034  866074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:32:13.172045  866074 kubeadm.go:319] 
	I1217 08:32:13.172161  866074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:32:13.172172  866074 kubeadm.go:319] 
	I1217 08:32:13.172200  866074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:32:13.172277  866074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:32:13.172344  866074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:32:13.172355  866074 kubeadm.go:319] 
	I1217 08:32:13.172415  866074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:32:13.172425  866074 kubeadm.go:319] 
	I1217 08:32:13.172481  866074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:32:13.172494  866074 kubeadm.go:319] 
	I1217 08:32:13.172584  866074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:32:13.172726  866074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:32:13.172821  866074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:32:13.172831  866074 kubeadm.go:319] 
	I1217 08:32:13.172934  866074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:32:13.173027  866074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:32:13.173036  866074 kubeadm.go:319] 
	I1217 08:32:13.173135  866074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7nqi1p.ejost2d3dqegwn4g \
	I1217 08:32:13.173265  866074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:32:13.173294  866074 kubeadm.go:319] 	--control-plane 
	I1217 08:32:13.173303  866074 kubeadm.go:319] 
	I1217 08:32:13.173408  866074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:32:13.173418  866074 kubeadm.go:319] 
	I1217 08:32:13.173517  866074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7nqi1p.ejost2d3dqegwn4g \
	I1217 08:32:13.173666  866074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:32:13.177005  866074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:32:13.177121  866074 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:32:13.177175  866074 cni.go:84] Creating CNI manager for ""
	I1217 08:32:13.177196  866074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:13.179613  866074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 08:32:10.797040  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:13.297857  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:11.847591  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:32:11.847629  876818 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225657"
	I1217 08:32:11.847703  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:11.870068  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:11.870172  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:11.870184  876818 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225657 && echo "default-k8s-diff-port-225657" | sudo tee /etc/hostname
	I1217 08:32:12.017902  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:32:12.017995  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.040970  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:12.041124  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:12.041148  876818 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225657/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:32:12.174812  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:32:12.174846  876818 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:32:12.174878  876818 ubuntu.go:190] setting up certificates
	I1217 08:32:12.174891  876818 provision.go:84] configureAuth start
	I1217 08:32:12.174961  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:12.195929  876818 provision.go:143] copyHostCerts
	I1217 08:32:12.196007  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:32:12.196020  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:32:12.196106  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:32:12.196259  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:32:12.196274  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:32:12.196320  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:32:12.196402  876818 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:32:12.196413  876818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:32:12.196438  876818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:32:12.196495  876818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225657 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-225657 localhost minikube]
	I1217 08:32:12.298236  876818 provision.go:177] copyRemoteCerts
	I1217 08:32:12.298295  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:32:12.298335  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.318951  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:12.424332  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:32:12.450112  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 08:32:12.470525  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 08:32:12.491813  876818 provision.go:87] duration metric: took 316.905148ms to configureAuth
	I1217 08:32:12.491849  876818 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:32:12.492046  876818 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:12.492151  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.513001  876818 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:12.513125  876818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1217 08:32:12.513141  876818 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:32:12.803327  876818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:32:12.803363  876818 machine.go:97] duration metric: took 4.126112041s to provisionDockerMachine
	I1217 08:32:12.803378  876818 client.go:176] duration metric: took 10.996527369s to LocalClient.Create
	I1217 08:32:12.803405  876818 start.go:167] duration metric: took 10.99661651s to libmachine.API.Create "default-k8s-diff-port-225657"
	I1217 08:32:12.803414  876818 start.go:293] postStartSetup for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:32:12.803428  876818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:32:12.803520  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:32:12.803590  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:12.822159  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:12.925471  876818 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:32:12.929675  876818 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:32:12.929714  876818 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:32:12.929734  876818 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:32:12.929814  876818 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:32:12.929919  876818 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:32:12.930052  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:32:12.938904  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:12.961125  876818 start.go:296] duration metric: took 157.693442ms for postStartSetup
	I1217 08:32:12.961555  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:12.982070  876818 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:32:12.982402  876818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:32:12.982460  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.002877  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.095087  876818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:32:13.100174  876818 start.go:128] duration metric: took 11.296476774s to createHost
	I1217 08:32:13.100209  876818 start.go:83] releasing machines lock for "default-k8s-diff-port-225657", held for 11.296696714s
	I1217 08:32:13.100279  876818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:32:13.119195  876818 ssh_runner.go:195] Run: cat /version.json
	I1217 08:32:13.119271  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.119274  876818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:32:13.119342  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:13.139794  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.140091  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:13.292825  876818 ssh_runner.go:195] Run: systemctl --version
	I1217 08:32:13.301062  876818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:32:13.347657  876818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:32:13.353086  876818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:32:13.353180  876818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:32:13.386293  876818 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 08:32:13.386324  876818 start.go:496] detecting cgroup driver to use...
	I1217 08:32:13.386363  876818 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:32:13.386440  876818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:32:13.406165  876818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:32:13.421667  876818 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:32:13.421735  876818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:32:13.445063  876818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:32:13.474069  876818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:32:13.589514  876818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:32:13.687876  876818 docker.go:234] disabling docker service ...
	I1217 08:32:13.687948  876818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:32:13.709115  876818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:32:13.725179  876818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:32:13.816070  876818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:32:13.908965  876818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:32:13.922931  876818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:32:13.938488  876818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:32:13.938601  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.949886  876818 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:32:13.949966  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.959623  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.969563  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.980342  876818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:32:13.989685  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:13.999720  876818 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:14.014863  876818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:14.024968  876818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:32:14.033477  876818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:32:14.041958  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:14.130836  876818 ssh_runner.go:195] Run: sudo systemctl restart crio
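
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before cri-o is restarted. A small sanity-check sketch in Go, not minikube code, with the expected values copied from the log lines above:

    // Sketch: confirm the cri-o drop-in carries the values the sed edits set.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        conf := string(data)
        for _, want := range []string{
            `pause_image = "registry.k8s.io/pause:3.10.1"`,
            `cgroup_manager = "systemd"`,
            `conmon_cgroup = "pod"`,
            `"net.ipv4.ip_unprivileged_port_start=0"`,
        } {
            if strings.Contains(conf, want) {
                fmt.Println("ok:", want)
            } else {
                fmt.Println("missing:", want)
            }
        }
    }
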
	I1217 08:32:14.324161  876818 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:32:14.324240  876818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:32:14.328783  876818 start.go:564] Will wait 60s for crictl version
	I1217 08:32:14.328842  876818 ssh_runner.go:195] Run: which crictl
	I1217 08:32:14.332732  876818 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:32:14.358741  876818 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:32:14.358828  876818 ssh_runner.go:195] Run: crio --version
	I1217 08:32:14.389865  876818 ssh_runner.go:195] Run: crio --version
	I1217 08:32:14.421345  876818 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 08:32:14.423125  876818 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:32:14.443156  876818 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:32:14.448782  876818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:32:14.461614  876818 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:32:14.461796  876818 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:14.461847  876818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:32:14.497773  876818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:32:14.497797  876818 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:32:14.497850  876818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:32:14.528137  876818 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:32:14.528160  876818 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:32:14.528168  876818 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1217 08:32:14.528254  876818 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:32:14.528318  876818 ssh_runner.go:195] Run: crio config
	I1217 08:32:14.584472  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:14.584502  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:14.584524  876818 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:32:14.584583  876818 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225657 NodeName:default-k8s-diff-port-225657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:32:14.584763  876818 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:32:14.584847  876818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:32:14.594854  876818 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:32:14.594919  876818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:32:14.605478  876818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1217 08:32:14.621822  876818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:32:14.641660  876818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
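
The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a sketch of a cheap sanity check, this walks those documents and prints each apiVersion/kind pair; the path comes from the log, and gopkg.in/yaml.v3 is an assumed dependency, not something minikube uses here:

    // Sketch: list the apiVersion/kind of every document in the generated file.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }
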
	I1217 08:32:14.656519  876818 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:32:14.660626  876818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:32:14.672783  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:14.763211  876818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:14.795518  876818 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657 for IP: 192.168.103.2
	I1217 08:32:14.795574  876818 certs.go:195] generating shared ca certs ...
	I1217 08:32:14.795596  876818 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.795767  876818 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:32:14.795826  876818 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:32:14.795840  876818 certs.go:257] generating profile certs ...
	I1217 08:32:14.795954  876818 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key
	I1217 08:32:14.795977  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt with IP's: []
	I1217 08:32:14.863228  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt ...
	I1217 08:32:14.863262  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.crt: {Name:mkdcfa20690e66f7711fa7eedb1c17f0013cea3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.863459  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key ...
	I1217 08:32:14.863479  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key: {Name:mk0c147f99dbcd9cd0b76dd50dbcc7358fb09eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.863633  876818 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92
	I1217 08:32:14.863658  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 08:32:14.926506  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 ...
	I1217 08:32:14.926559  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92: {Name:mkeab2e9787f4fdc822d05ef2a5a31d73807e7a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.926783  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92 ...
	I1217 08:32:14.926807  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92: {Name:mk908b8eefd79aa9fd3e47b0e9dd700056cd3a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:14.926928  876818 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt.632bab92 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt
	I1217 08:32:14.927054  876818 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key
	I1217 08:32:14.927150  876818 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key
	I1217 08:32:14.927179  876818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt with IP's: []
	I1217 08:32:14.999838  876818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt ...
	I1217 08:32:14.999868  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt: {Name:mk1fe736b631b3578e9134ad8e647a4ce10e1dfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:15.000043  876818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key ...
	I1217 08:32:15.000057  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key: {Name:mkff008ec12026d35b6afe310c5ec1f253ee363c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:15.000226  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:32:15.000264  876818 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:32:15.000274  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:32:15.000297  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:32:15.000320  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:32:15.000412  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:32:15.000466  876818 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:15.001158  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:32:15.022100  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:32:15.041157  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:32:15.060148  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:32:15.083276  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 08:32:15.106435  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:32:15.130968  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:32:15.151120  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:32:15.171738  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:32:15.196153  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:32:15.216671  876818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:32:15.237947  876818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:32:15.252432  876818 ssh_runner.go:195] Run: openssl version
	I1217 08:32:15.259775  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.268463  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:32:15.277889  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.282785  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.282840  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:32:15.321178  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:15.332457  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:32:15.342212  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.350940  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:32:15.359554  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.363904  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.363983  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:32:15.405120  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:32:15.413813  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:32:15.422721  876818 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.431742  876818 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:32:15.440099  876818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.445063  876818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.445150  876818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:32:15.485945  876818 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:32:15.495194  876818 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
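
The `openssl x509 -hash -noout` plus `ln -fs` pairs above install each CA into /etc/ssl/certs under its OpenSSL subject hash (for example 3ec20f2e.0 and b5213941.0). A compact Go sketch of that pattern, purely illustrative; the input path is an assumption and writing the symlink needs root:

    // Sketch: compute the subject hash via openssl and create <hash>.0 symlink.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCert(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // emulate `ln -fs`: replace any existing link
        return link, os.Symlink(pemPath, link)
    }

    func main() {
        link, err := installCert("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("installed:", link)
    }
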
	I1217 08:32:15.504455  876818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:32:15.509009  876818 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:32:15.509078  876818 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:15.509159  876818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:32:15.509212  876818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:32:15.542000  876818 cri.go:89] found id: ""
	I1217 08:32:15.542078  876818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:32:15.551359  876818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:32:15.561709  876818 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:32:15.561782  876818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:32:15.571765  876818 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:32:15.571803  876818 kubeadm.go:158] found existing configuration files:
	
	I1217 08:32:15.571859  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 08:32:15.582295  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:32:15.582353  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:32:15.591491  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 08:32:15.600423  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:32:15.600486  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:32:15.609303  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 08:32:15.618950  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:32:15.619020  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:32:15.628515  876818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 08:32:15.637988  876818 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:32:15.638046  876818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:32:15.646553  876818 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:32:15.711259  876818 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:32:15.775493  876818 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:32:13.181565  866074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:13.186365  866074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 08:32:13.186392  866074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:13.200333  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:32:13.442509  866074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:13.442697  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:13.442819  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-936988 minikube.k8s.io/updated_at=2025_12_17T08_32_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=no-preload-936988 minikube.k8s.io/primary=true
	I1217 08:32:13.461848  866074 ops.go:34] apiserver oom_adj: -16
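
The oom_adj probe above reads the apiserver's OOM score adjustment; -16 is strongly negative, so the kernel OOM killer avoids that process. A sketch of the same probe in Go (minikube runs the equivalent shell pipeline over SSH; this is only an illustration):

    // Sketch: find the newest kube-apiserver pid and read /proc/<pid>/oom_adj.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "kube-apiserver not found:", err)
            os.Exit(1)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }
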
	I1217 08:32:13.552148  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:14.053185  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:14.552761  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:15.052803  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:15.552460  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:16.052244  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:16.552431  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.052268  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.552825  866074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:17.630473  866074 kubeadm.go:1114] duration metric: took 4.187821582s to wait for elevateKubeSystemPrivileges
	I1217 08:32:17.630512  866074 kubeadm.go:403] duration metric: took 13.943055923s to StartCluster
	I1217 08:32:17.630550  866074 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:17.630631  866074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:17.632218  866074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:17.632577  866074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:17.632602  866074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:17.632683  866074 addons.go:70] Setting storage-provisioner=true in profile "no-preload-936988"
	I1217 08:32:17.632702  866074 addons.go:239] Setting addon storage-provisioner=true in "no-preload-936988"
	I1217 08:32:17.632731  866074 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:32:17.632780  866074 addons.go:70] Setting default-storageclass=true in profile "no-preload-936988"
	I1217 08:32:17.632811  866074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-936988"
	I1217 08:32:17.633099  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.632569  866074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:17.633241  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.633548  866074 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:32:17.634963  866074 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:17.640101  866074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:17.668302  866074 addons.go:239] Setting addon default-storageclass=true in "no-preload-936988"
	I1217 08:32:17.668358  866074 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:32:17.668491  866074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:17.668875  866074 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:32:17.670105  866074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:17.670126  866074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:17.670199  866074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-936988
	I1217 08:32:17.704878  866074 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/no-preload-936988/id_ed25519 Username:docker}
	I1217 08:32:17.708635  866074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:17.708697  866074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:17.708784  866074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-936988
	I1217 08:32:17.736140  866074 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/no-preload-936988/id_ed25519 Username:docker}
	I1217 08:32:17.758941  866074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:17.809686  866074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1217 08:32:14.586794  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	W1217 08:32:16.587145  860032 node_ready.go:57] node "old-k8s-version-640910" has "Ready":"False" status (will retry)
	I1217 08:32:17.830908  866074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:17.863741  866074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:18.003143  866074 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 08:32:18.004756  866074 node_ready.go:35] waiting up to 6m0s for node "no-preload-936988" to be "Ready" ...
	I1217 08:32:18.233677  866074 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 08:32:15.797222  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	W1217 08:32:17.797847  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:19.088257  860032 node_ready.go:49] node "old-k8s-version-640910" is "Ready"
	I1217 08:32:19.088298  860032 node_ready.go:38] duration metric: took 15.506235047s for node "old-k8s-version-640910" to be "Ready" ...
	I1217 08:32:19.088315  860032 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:19.088364  860032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:19.106742  860032 api_server.go:72] duration metric: took 16.064061733s to wait for apiserver process to appear ...
	I1217 08:32:19.106778  860032 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:19.106802  860032 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 08:32:19.113046  860032 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 08:32:19.114637  860032 api_server.go:141] control plane version: v1.28.0
	I1217 08:32:19.114666  860032 api_server.go:131] duration metric: took 7.880626ms to wait for apiserver health ...
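
The healthz wait above amounts to an HTTPS GET against the apiserver, treating a 200 "ok" response as healthy. A minimal sketch of that probe; the address is copied from the log, and InsecureSkipVerify is a shortcut for illustration only, whereas the real client trusts the cluster CA:

    // Sketch: poll the apiserver /healthz endpoint once and print the result.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Shortcut for the sketch only; verify against the cluster CA in real use.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }
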
	I1217 08:32:19.114680  860032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:19.120583  860032 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:19.120638  860032 system_pods.go:61] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.120654  860032 system_pods.go:61] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.120662  860032 system_pods.go:61] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.120680  860032 system_pods.go:61] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.120695  860032 system_pods.go:61] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.120700  860032 system_pods.go:61] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.120706  860032 system_pods.go:61] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.120714  860032 system_pods.go:61] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.120729  860032 system_pods.go:74] duration metric: took 6.039419ms to wait for pod list to return data ...
	I1217 08:32:19.120746  860032 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:19.123936  860032 default_sa.go:45] found service account: "default"
	I1217 08:32:19.123970  860032 default_sa.go:55] duration metric: took 3.215131ms for default service account to be created ...
	I1217 08:32:19.124052  860032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:19.129828  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.129937  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.129949  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.129960  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.129965  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.129971  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.129976  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.129980  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.129987  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.130015  860032 retry.go:31] will retry after 193.985772ms: missing components: kube-dns
	I1217 08:32:19.330692  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.330740  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:19.330752  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.330761  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.330767  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.330772  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.330777  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.330780  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.330784  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:19.330808  860032 retry.go:31] will retry after 264.53787ms: missing components: kube-dns
	I1217 08:32:19.602757  860032 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:19.602794  860032 system_pods.go:89] "coredns-5dd5756b68-mr99d" [14d0e140-912f-42bd-a799-4db74ca65844] Running
	I1217 08:32:19.602803  860032 system_pods.go:89] "etcd-old-k8s-version-640910" [14a09dff-428c-46d1-8e08-aec6f6daadf8] Running
	I1217 08:32:19.602808  860032 system_pods.go:89] "kindnet-x9g6n" [59d4e46e-e40e-41fe-af7d-613f48f08315] Running
	I1217 08:32:19.602813  860032 system_pods.go:89] "kube-apiserver-old-k8s-version-640910" [78a97a46-ebbb-433e-a5aa-fed2c98c7bfe] Running
	I1217 08:32:19.602818  860032 system_pods.go:89] "kube-controller-manager-old-k8s-version-640910" [e70177eb-50b3-4bcc-93c7-b5da7b7c9a58] Running
	I1217 08:32:19.602823  860032 system_pods.go:89] "kube-proxy-cwfwr" [e0ce0d47-e184-464c-8ec0-4907f3ab9b41] Running
	I1217 08:32:19.602828  860032 system_pods.go:89] "kube-scheduler-old-k8s-version-640910" [b2375b0b-8cee-440a-b291-faf40799d1ea] Running
	I1217 08:32:19.602833  860032 system_pods.go:89] "storage-provisioner" [5aaae8c7-6580-4b9a-8d54-442a96236756] Running
	I1217 08:32:19.602844  860032 system_pods.go:126] duration metric: took 478.778338ms to wait for k8s-apps to be running ...
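
The two "will retry after ...: missing components: kube-dns" lines above show the polling pattern used while waiting for k8s-apps: check, sleep a short jittered interval, check again. A stand-in sketch of that loop; checkMissing and the backoff schedule are illustrative, not minikube's exact implementation:

    // Sketch: retry until no required component is missing.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    var polls int

    // checkMissing pretends kube-dns needs three polls before it is Running.
    func checkMissing() []string {
        polls++
        if polls < 3 {
            return []string{"kube-dns"}
        }
        return nil
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            missing := checkMissing()
            if len(missing) == 0 {
                fmt.Printf("all k8s-apps running after %d attempts\n", attempt)
                return
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
            time.Sleep(wait)
            delay += delay / 2
        }
    }
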
	I1217 08:32:19.602855  860032 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:32:19.602919  860032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:32:19.623074  860032 system_svc.go:56] duration metric: took 20.20768ms WaitForService to wait for kubelet
	I1217 08:32:19.623106  860032 kubeadm.go:587] duration metric: took 16.580433192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:19.623129  860032 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:32:19.626994  860032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:32:19.627046  860032 node_conditions.go:123] node cpu capacity is 8
	I1217 08:32:19.627099  860032 node_conditions.go:105] duration metric: took 3.935608ms to run NodePressure ...
	I1217 08:32:19.627120  860032 start.go:242] waiting for startup goroutines ...
	I1217 08:32:19.627130  860032 start.go:247] waiting for cluster config update ...
	I1217 08:32:19.627144  860032 start.go:256] writing updated cluster config ...
	I1217 08:32:19.627664  860032 ssh_runner.go:195] Run: rm -f paused
	I1217 08:32:19.633357  860032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:19.639945  860032 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.647463  860032 pod_ready.go:94] pod "coredns-5dd5756b68-mr99d" is "Ready"
	I1217 08:32:19.647499  860032 pod_ready.go:86] duration metric: took 7.52479ms for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.652349  860032 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.659423  860032 pod_ready.go:94] pod "etcd-old-k8s-version-640910" is "Ready"
	I1217 08:32:19.659461  860032 pod_ready.go:86] duration metric: took 7.072786ms for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.663294  860032 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.669941  860032 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-640910" is "Ready"
	I1217 08:32:19.669976  860032 pod_ready.go:86] duration metric: took 6.648805ms for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:19.673979  860032 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.042200  860032 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-640910" is "Ready"
	I1217 08:32:20.042232  860032 pod_ready.go:86] duration metric: took 368.226903ms for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.239616  860032 pod_ready.go:83] waiting for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.638112  860032 pod_ready.go:94] pod "kube-proxy-cwfwr" is "Ready"
	I1217 08:32:20.638140  860032 pod_ready.go:86] duration metric: took 398.494834ms for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:20.840026  860032 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.239099  860032 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-640910" is "Ready"
	I1217 08:32:21.239132  860032 pod_ready.go:86] duration metric: took 399.059167ms for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.239147  860032 pod_ready.go:40] duration metric: took 1.605741174s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:21.285586  860032 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 08:32:21.338149  860032 out.go:203] 
	W1217 08:32:21.340341  860032 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 08:32:21.342018  860032 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 08:32:21.345623  860032 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-640910" cluster and "default" namespace by default
	W1217 08:32:21.349107  860032 root.go:91] failed to log command end to audit: failed to find a log row with id equals to dd79e1a3-c046-43f1-a071-2f0a5a4d6a1b
	I1217 08:32:18.235063  866074 addons.go:530] duration metric: took 602.463723ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:18.509742  866074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-936988" context rescaled to 1 replicas
	W1217 08:32:20.008693  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:22.509397  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:20.297227  866708 node_ready.go:57] node "embed-certs-581631" has "Ready":"False" status (will retry)
	I1217 08:32:20.796895  866708 node_ready.go:49] node "embed-certs-581631" is "Ready"
	I1217 08:32:20.796932  866708 node_ready.go:38] duration metric: took 16.503273535s for node "embed-certs-581631" to be "Ready" ...
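
The node-Ready waits in the logs above poll the node object until its Ready condition turns True. A compact client-go sketch of that check; the kubeconfig path is a placeholder, the node name is taken from the log, and k8s.io/client-go is an assumed dependency rather than the code minikube runs here:

    // Sketch: poll a node until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            ok, err := nodeReady(context.Background(), cs, "embed-certs-581631")
            if err == nil && ok {
                fmt.Println("node is Ready")
                return
            }
            fmt.Println("node not Ready yet (will retry)")
            time.Sleep(2 * time.Second)
        }
    }
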
	I1217 08:32:20.796952  866708 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:20.797007  866708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:20.811909  866708 api_server.go:72] duration metric: took 16.874314934s to wait for apiserver process to appear ...
	I1217 08:32:20.811944  866708 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:20.811970  866708 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:32:20.817838  866708 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:32:20.819086  866708 api_server.go:141] control plane version: v1.34.3
	I1217 08:32:20.819118  866708 api_server.go:131] duration metric: took 7.165561ms to wait for apiserver health ...
	I1217 08:32:20.819129  866708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:20.823436  866708 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:20.823477  866708 system_pods.go:61] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:20.823491  866708 system_pods.go:61] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:20.823500  866708 system_pods.go:61] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:20.823506  866708 system_pods.go:61] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:20.823512  866708 system_pods.go:61] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:20.823518  866708 system_pods.go:61] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:20.823523  866708 system_pods.go:61] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:20.823540  866708 system_pods.go:61] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:20.823549  866708 system_pods.go:74] duration metric: took 4.412326ms to wait for pod list to return data ...
	I1217 08:32:20.823559  866708 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:20.827902  866708 default_sa.go:45] found service account: "default"
	I1217 08:32:20.827931  866708 default_sa.go:55] duration metric: took 4.364348ms for default service account to be created ...
	I1217 08:32:20.827945  866708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:20.924443  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:20.924498  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:20.924512  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:20.924573  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:20.924580  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:20.924586  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:20.924592  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:20.924603  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:20.924611  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:20.924654  866708 retry.go:31] will retry after 243.506417ms: missing components: kube-dns
	I1217 08:32:21.172665  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.172712  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:21.172718  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.172723  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.172728  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.172732  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.172735  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.172738  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.172743  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:21.172760  866708 retry.go:31] will retry after 326.410198ms: missing components: kube-dns
	I1217 08:32:21.506028  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.506083  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:21.506094  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.506101  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.506107  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.506115  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.506121  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.506126  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.506147  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:21.506169  866708 retry.go:31] will retry after 400.365348ms: missing components: kube-dns
	I1217 08:32:21.911225  866708 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:21.911348  866708 system_pods.go:89] "coredns-66bc5c9577-p7sqj" [62629b09-13d5-43fe-bce3-59d9c9a73f8e] Running
	I1217 08:32:21.911362  866708 system_pods.go:89] "etcd-embed-certs-581631" [27654687-d4a9-4468-8b71-59427b2ffb98] Running
	I1217 08:32:21.911368  866708 system_pods.go:89] "kindnet-wv7n7" [6c471e5c-829a-440a-b333-19c2b3695fa8] Running
	I1217 08:32:21.911373  866708 system_pods.go:89] "kube-apiserver-embed-certs-581631" [16895e77-1635-4604-a1e4-a3c3a50ce325] Running
	I1217 08:32:21.911381  866708 system_pods.go:89] "kube-controller-manager-embed-certs-581631" [04f86da2-7734-4b78-8dbe-7fe7a2f63410] Running
	I1217 08:32:21.911386  866708 system_pods.go:89] "kube-proxy-7z26t" [95fe0d1a-5b68-4884-8189-458f93ba38e2] Running
	I1217 08:32:21.911392  866708 system_pods.go:89] "kube-scheduler-embed-certs-581631" [f914abe2-c51b-4f3c-b6a7-561cfde10fb5] Running
	I1217 08:32:21.911396  866708 system_pods.go:89] "storage-provisioner" [c0c0f2e6-244f-4a01-9660-dcbec60b1c1f] Running
	I1217 08:32:21.911415  866708 system_pods.go:126] duration metric: took 1.083462108s to wait for k8s-apps to be running ...
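The retry.go lines above poll the kube-system pod list and back off with a small randomized delay until nothing is reported missing. A stand-in for that loop with assumed delay values; the helper name and the fake check are illustrative, not minikube's implementation:

	// retry.go sketch - keep calling a check with a jittered, growing delay until it
	// stops reporting missing components, mirroring the "will retry after ...ms" lines.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		base := 200 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return err
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // jitter
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			base += 100 * time.Millisecond // grow the delay a little each round
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(10*time.Second, func() error {
			attempts++
			if attempts < 4 { // pretend kube-dns needs a few polls to come up
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("done:", err)
	}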
	I1217 08:32:21.911427  866708 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:32:21.911486  866708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:32:21.926549  866708 system_svc.go:56] duration metric: took 15.103695ms WaitForService to wait for kubelet
	I1217 08:32:21.926585  866708 kubeadm.go:587] duration metric: took 17.988996239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:21.926608  866708 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:32:21.929905  866708 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:32:21.929939  866708 node_conditions.go:123] node cpu capacity is 8
	I1217 08:32:21.929959  866708 node_conditions.go:105] duration metric: took 3.345146ms to run NodePressure ...
	I1217 08:32:21.929987  866708 start.go:242] waiting for startup goroutines ...
	I1217 08:32:21.929998  866708 start.go:247] waiting for cluster config update ...
	I1217 08:32:21.930013  866708 start.go:256] writing updated cluster config ...
	I1217 08:32:21.930341  866708 ssh_runner.go:195] Run: rm -f paused
	I1217 08:32:21.935015  866708 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:21.939503  866708 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.944458  866708 pod_ready.go:94] pod "coredns-66bc5c9577-p7sqj" is "Ready"
	I1217 08:32:21.944489  866708 pod_ready.go:86] duration metric: took 4.957519ms for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.946799  866708 pod_ready.go:83] waiting for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.950990  866708 pod_ready.go:94] pod "etcd-embed-certs-581631" is "Ready"
	I1217 08:32:21.951012  866708 pod_ready.go:86] duration metric: took 4.188719ms for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.952992  866708 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.956931  866708 pod_ready.go:94] pod "kube-apiserver-embed-certs-581631" is "Ready"
	I1217 08:32:21.956954  866708 pod_ready.go:86] duration metric: took 3.940004ms for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:21.958889  866708 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.340125  866708 pod_ready.go:94] pod "kube-controller-manager-embed-certs-581631" is "Ready"
	I1217 08:32:22.340165  866708 pod_ready.go:86] duration metric: took 381.252466ms for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.539721  866708 pod_ready.go:83] waiting for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:22.939424  866708 pod_ready.go:94] pod "kube-proxy-7z26t" is "Ready"
	I1217 08:32:22.939452  866708 pod_ready.go:86] duration metric: took 399.692811ms for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.140192  866708 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.540426  866708 pod_ready.go:94] pod "kube-scheduler-embed-certs-581631" is "Ready"
	I1217 08:32:23.540465  866708 pod_ready.go:86] duration metric: took 400.236944ms for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:23.540484  866708 pod_ready.go:40] duration metric: took 1.60543256s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:23.588350  866708 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:32:23.590603  866708 out.go:179] * Done! kubectl is now configured to use "embed-certs-581631" cluster and "default" namespace by default
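The pod_ready waits logged for this cluster check each labelled kube-system pod for a Ready condition and bail out once it is true or the pod is gone. A condensed sketch that shells out to kubectl rather than using client-go as pod_ready.go actually does; the pod name, namespace and 4m budget are taken from the log:

	// pod_ready_sketch.go - poll a pod's Ready condition until it reports True.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(namespace, name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // same budget as the extra wait above
		for time.Now().Before(deadline) {
			ready, err := podReady("kube-system", "coredns-66bc5c9577-p7sqj")
			if err == nil && ready {
				fmt.Println(`pod "coredns-66bc5c9577-p7sqj" is "Ready"`)
				return
			}
			time.Sleep(400 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}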
	I1217 08:32:26.556175  876818 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 08:32:26.556252  876818 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:32:26.556377  876818 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:32:26.556450  876818 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:32:26.556515  876818 kubeadm.go:319] OS: Linux
	I1217 08:32:26.556622  876818 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:32:26.556686  876818 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:32:26.556759  876818 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:32:26.556827  876818 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:32:26.556897  876818 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:32:26.556963  876818 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:32:26.557031  876818 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:32:26.557094  876818 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:32:26.557191  876818 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:32:26.557272  876818 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:32:26.557426  876818 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:32:26.557524  876818 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:32:26.559101  876818 out.go:252]   - Generating certificates and keys ...
	I1217 08:32:26.559206  876818 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:32:26.559306  876818 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:32:26.559404  876818 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:32:26.559494  876818 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:32:26.559588  876818 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:32:26.559669  876818 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:32:26.559768  876818 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:32:26.559896  876818 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-225657 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 08:32:26.559944  876818 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:32:26.560063  876818 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-225657 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 08:32:26.560119  876818 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:32:26.560192  876818 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:32:26.560248  876818 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:32:26.560305  876818 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:32:26.560354  876818 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:32:26.560409  876818 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:32:26.560458  876818 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:32:26.560520  876818 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:32:26.560589  876818 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:32:26.560688  876818 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:32:26.560784  876818 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:32:26.563576  876818 out.go:252]   - Booting up control plane ...
	I1217 08:32:26.563704  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:32:26.563826  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:32:26.563906  876818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:32:26.564009  876818 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:32:26.564152  876818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:32:26.564278  876818 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:32:26.564399  876818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:32:26.564441  876818 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:32:26.564577  876818 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:32:26.564713  876818 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:32:26.564805  876818 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001759685s
	I1217 08:32:26.564919  876818 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:32:26.565037  876818 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1217 08:32:26.565141  876818 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:32:26.565222  876818 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:32:26.565289  876818 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.014387076s
	I1217 08:32:26.565342  876818 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.356564571s
	I1217 08:32:26.565447  876818 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00264337s
	I1217 08:32:26.565610  876818 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:32:26.565736  876818 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:32:26.565800  876818 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:32:26.565982  876818 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-225657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:32:26.566041  876818 kubeadm.go:319] [bootstrap-token] Using token: 5amo5u.ea0ubedundw2l43g
	I1217 08:32:26.567706  876818 out.go:252]   - Configuring RBAC rules ...
	I1217 08:32:26.567799  876818 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:32:26.567870  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:32:26.567982  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:32:26.568087  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:32:26.568181  876818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:32:26.568278  876818 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:32:26.568368  876818 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:32:26.568412  876818 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 08:32:26.568453  876818 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:32:26.568458  876818 kubeadm.go:319] 
	I1217 08:32:26.568525  876818 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:32:26.568544  876818 kubeadm.go:319] 
	I1217 08:32:26.568624  876818 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:32:26.568631  876818 kubeadm.go:319] 
	I1217 08:32:26.568650  876818 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:32:26.568715  876818 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:32:26.568761  876818 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:32:26.568765  876818 kubeadm.go:319] 
	I1217 08:32:26.568806  876818 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:32:26.568811  876818 kubeadm.go:319] 
	I1217 08:32:26.568848  876818 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:32:26.568853  876818 kubeadm.go:319] 
	I1217 08:32:26.568944  876818 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:32:26.569079  876818 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:32:26.569187  876818 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:32:26.569197  876818 kubeadm.go:319] 
	I1217 08:32:26.569320  876818 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:32:26.569417  876818 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:32:26.569424  876818 kubeadm.go:319] 
	I1217 08:32:26.569494  876818 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 5amo5u.ea0ubedundw2l43g \
	I1217 08:32:26.569666  876818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:32:26.569717  876818 kubeadm.go:319] 	--control-plane 
	I1217 08:32:26.569727  876818 kubeadm.go:319] 
	I1217 08:32:26.569859  876818 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:32:26.569871  876818 kubeadm.go:319] 
	I1217 08:32:26.569979  876818 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 5amo5u.ea0ubedundw2l43g \
	I1217 08:32:26.570124  876818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:32:26.570137  876818 cni.go:84] Creating CNI manager for ""
	I1217 08:32:26.570148  876818 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:26.572044  876818 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 08:32:25.008774  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	W1217 08:32:27.508833  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	I1217 08:32:26.573679  876818 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:32:26.578351  876818 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 08:32:26.578369  876818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:32:26.593905  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
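The CNI step above copies a kindnet manifest onto the node and applies it with the node-local kubectl binary and kubeconfig. A compressed local sketch of the same two actions; the real flow runs them over SSH via ssh_runner, and the manifest body here is only a placeholder:

	// cni_apply_sketch.go - write the CNI manifest and apply it with the node's kubectl.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifest := []byte("# kindnet DaemonSet manifest would go here (~2.6 KB in the log above)\n")
		path := "/var/tmp/minikube/cni.yaml"
		if err := os.WriteFile(path, manifest, 0o644); err != nil {
			fmt.Println("write manifest:", err)
			return
		}
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.3/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("kubectl apply failed:", err)
		}
	}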
	I1217 08:32:26.826336  876818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:32:26.826621  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:26.826778  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-225657 minikube.k8s.io/updated_at=2025_12_17T08_32_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=default-k8s-diff-port-225657 minikube.k8s.io/primary=true
	I1217 08:32:26.840005  876818 ops.go:34] apiserver oom_adj: -16
	I1217 08:32:26.926850  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:27.427442  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:27.927518  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:28.427663  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:28.927809  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:29.427030  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:29.927781  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:30.427782  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:30.927463  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:31.427777  876818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:32:31.512412  876818 kubeadm.go:1114] duration metric: took 4.685935242s to wait for elevateKubeSystemPrivileges
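The repeated "kubectl get sa default" runs above are minikube waiting for the default service account to exist before granting it cluster-admin via the minikube-rbac clusterrolebinding (elevateKubeSystemPrivileges). A stand-in using plain kubectl and an assumed two-minute budget:

	// sa_wait_sketch.go - poll until the default service account has been created.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if err := exec.Command("kubectl", "-n", "default", "get", "sa", "default").Run(); err == nil {
				fmt.Println("default service account exists; safe to bind cluster-admin to kube-system:default")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}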
	I1217 08:32:31.512448  876818 kubeadm.go:403] duration metric: took 16.003374717s to StartCluster
	I1217 08:32:31.512469  876818 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:31.512588  876818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:31.515589  876818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:32:31.515897  876818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:32:31.515919  876818 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:32:31.515990  876818 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:32:31.516136  876818 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225657"
	I1217 08:32:31.516147  876818 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:31.516165  876818 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225657"
	I1217 08:32:31.516199  876818 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225657"
	I1217 08:32:31.516206  876818 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:32:31.516231  876818 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225657"
	I1217 08:32:31.516689  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:31.516803  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:31.518011  876818 out.go:179] * Verifying Kubernetes components...
	I1217 08:32:31.521771  876818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:31.550749  876818 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:32:31.552521  876818 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:31.552580  876818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:32:31.552645  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:31.562897  876818 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225657"
	I1217 08:32:31.563005  876818 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:32:31.563559  876818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:32:31.601968  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:31.620523  876818 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:31.620567  876818 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:32:31.620636  876818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:32:31.653313  876818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:32:31.664324  876818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:32:31.738146  876818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:32:31.779787  876818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:32:31.828233  876818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:32:31.985910  876818 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
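The host record injection above is performed by the sed pipeline logged at 08:32:31.664324, which splices a hosts {} stanza for host.minikube.internal in front of the "forward . /etc/resolv.conf" plugin in the CoreDNS Corefile. A sketch of the same transformation on an in-memory string; minikube edits the coredns ConfigMap and pipes the result to kubectl replace, and the sample Corefile below is an assumption:

	// corefile_hosts_sketch.go - insert a hosts block before the forward plugin.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out.WriteString(stanza) // hosts block goes just before the forward plugin
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.103.1"))
	}

With the hosts plugin in place, pods resolving host.minikube.internal get the host gateway address (192.168.103.1 here) without leaving the cluster DNS.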
	I1217 08:32:31.988599  876818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:32:32.201458  876818 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 08:32:30.008823  866074 node_ready.go:57] node "no-preload-936988" has "Ready":"False" status (will retry)
	I1217 08:32:31.508360  866074 node_ready.go:49] node "no-preload-936988" is "Ready"
	I1217 08:32:31.508395  866074 node_ready.go:38] duration metric: took 13.503607631s for node "no-preload-936988" to be "Ready" ...
	I1217 08:32:31.508411  866074 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:32:31.508466  866074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:32:31.526358  866074 api_server.go:72] duration metric: took 13.89313601s to wait for apiserver process to appear ...
	I1217 08:32:31.526391  866074 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:32:31.526418  866074 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 08:32:31.533062  866074 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 08:32:31.535433  866074 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:32:31.535473  866074 api_server.go:131] duration metric: took 9.073454ms to wait for apiserver health ...
	I1217 08:32:31.535486  866074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:32:31.547735  866074 system_pods.go:59] 8 kube-system pods found
	I1217 08:32:31.547786  866074 system_pods.go:61] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:31.547795  866074 system_pods.go:61] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:31.547804  866074 system_pods.go:61] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:31.547810  866074 system_pods.go:61] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:31.547819  866074 system_pods.go:61] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:31.547824  866074 system_pods.go:61] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:31.547830  866074 system_pods.go:61] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:31.547841  866074 system_pods.go:61] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:31.547922  866074 system_pods.go:74] duration metric: took 12.356025ms to wait for pod list to return data ...
	I1217 08:32:31.547959  866074 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:32:31.554847  866074 default_sa.go:45] found service account: "default"
	I1217 08:32:31.554884  866074 default_sa.go:55] duration metric: took 6.915973ms for default service account to be created ...
	I1217 08:32:31.554895  866074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:32:31.561162  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:31.561207  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:31.561216  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:31.561225  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:31.561233  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:31.561238  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:31.561243  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:31.561247  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:31.561254  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:31.561296  866074 retry.go:31] will retry after 301.322132ms: missing components: kube-dns
	I1217 08:32:31.871587  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:31.871634  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:31.871641  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:31.871650  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:31.871656  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:31.871663  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:31.871670  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:31.871675  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:31.871683  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:31.871702  866074 retry.go:31] will retry after 269.277981ms: missing components: kube-dns
	I1217 08:32:32.145106  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:32.145140  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:32.145147  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:32.145154  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:32.145157  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:32.145161  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:32.145165  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:32.145168  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:32.145175  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:32.145196  866074 retry.go:31] will retry after 310.631471ms: missing components: kube-dns
	I1217 08:32:32.462205  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:32.462251  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:32:32.462261  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:32.462270  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:32.462275  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:32.462281  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:32.462285  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:32.462291  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:32.462299  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:32:32.462326  866074 retry.go:31] will retry after 522.584802ms: missing components: kube-dns
	I1217 08:32:32.989792  866074 system_pods.go:86] 8 kube-system pods found
	I1217 08:32:32.989846  866074 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running
	I1217 08:32:32.989856  866074 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running
	I1217 08:32:32.989863  866074 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running
	I1217 08:32:32.989870  866074 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running
	I1217 08:32:32.989875  866074 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running
	I1217 08:32:32.989880  866074 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running
	I1217 08:32:32.989886  866074 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running
	I1217 08:32:32.989891  866074 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running
	I1217 08:32:32.989902  866074 system_pods.go:126] duration metric: took 1.434998823s to wait for k8s-apps to be running ...
	I1217 08:32:32.989912  866074 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:32:32.990071  866074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:32:33.008019  866074 system_svc.go:56] duration metric: took 18.095741ms WaitForService to wait for kubelet
	I1217 08:32:33.008139  866074 kubeadm.go:587] duration metric: took 15.374921494s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:33.008172  866074 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:32:33.011998  866074 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:32:33.012030  866074 node_conditions.go:123] node cpu capacity is 8
	I1217 08:32:33.012046  866074 node_conditions.go:105] duration metric: took 3.86867ms to run NodePressure ...
	I1217 08:32:33.012059  866074 start.go:242] waiting for startup goroutines ...
	I1217 08:32:33.012067  866074 start.go:247] waiting for cluster config update ...
	I1217 08:32:33.012081  866074 start.go:256] writing updated cluster config ...
	I1217 08:32:33.012372  866074 ssh_runner.go:195] Run: rm -f paused
	I1217 08:32:33.017424  866074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:33.022100  866074 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.028239  866074 pod_ready.go:94] pod "coredns-7d764666f9-ssxts" is "Ready"
	I1217 08:32:33.028273  866074 pod_ready.go:86] duration metric: took 6.139183ms for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.032303  866074 pod_ready.go:83] waiting for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.037809  866074 pod_ready.go:94] pod "etcd-no-preload-936988" is "Ready"
	I1217 08:32:33.037838  866074 pod_ready.go:86] duration metric: took 5.40925ms for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.089430  866074 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.094775  866074 pod_ready.go:94] pod "kube-apiserver-no-preload-936988" is "Ready"
	I1217 08:32:33.094805  866074 pod_ready.go:86] duration metric: took 5.342784ms for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.097110  866074 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.422475  866074 pod_ready.go:94] pod "kube-controller-manager-no-preload-936988" is "Ready"
	I1217 08:32:33.422509  866074 pod_ready.go:86] duration metric: took 325.373318ms for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:33.622173  866074 pod_ready.go:83] waiting for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:34.022864  866074 pod_ready.go:94] pod "kube-proxy-rrz8t" is "Ready"
	I1217 08:32:34.022899  866074 pod_ready.go:86] duration metric: took 400.695781ms for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:34.223750  866074 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:34.622868  866074 pod_ready.go:94] pod "kube-scheduler-no-preload-936988" is "Ready"
	I1217 08:32:34.622896  866074 pod_ready.go:86] duration metric: took 399.109058ms for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:32:34.622907  866074 pod_ready.go:40] duration metric: took 1.605435436s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:32:34.673181  866074 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:32:34.676678  866074 out.go:179] * Done! kubectl is now configured to use "no-preload-936988" cluster and "default" namespace by default
	I1217 08:32:32.203794  876818 addons.go:530] duration metric: took 687.800297ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:32:32.493114  876818 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-225657" context rescaled to 1 replicas
	W1217 08:32:33.993165  876818 node_ready.go:57] node "default-k8s-diff-port-225657" has "Ready":"False" status (will retry)
	W1217 08:32:36.492749  876818 node_ready.go:57] node "default-k8s-diff-port-225657" has "Ready":"False" status (will retry)
	W1217 08:32:38.992545  876818 node_ready.go:57] node "default-k8s-diff-port-225657" has "Ready":"False" status (will retry)
	W1217 08:32:40.992631  876818 node_ready.go:57] node "default-k8s-diff-port-225657" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 17 08:32:31 no-preload-936988 crio[767]: time="2025-12-17T08:32:31.756696373Z" level=info msg="Starting container: 0d429967ce05685ae10e9e10849fbdda244af383957f5879acf8859c17df3cc0" id=5d0b45e0-b319-4894-9ed9-28a231485e35 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:31 no-preload-936988 crio[767]: time="2025-12-17T08:32:31.761472873Z" level=info msg="Started container" PID=2783 containerID=0d429967ce05685ae10e9e10849fbdda244af383957f5879acf8859c17df3cc0 description=kube-system/coredns-7d764666f9-ssxts/coredns id=5d0b45e0-b319-4894-9ed9-28a231485e35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d590d8ecde580ab27c240f92f53e95a7caad9533a9f31d91dae9a3f8d1e1e6ee
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.146369555Z" level=info msg="Running pod sandbox: default/busybox/POD" id=da203fbe-f7c5-4b84-81b7-afa67a9a5b0d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.146472546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.154131088Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ae4d46df528ce0cb95ed72b2bde504c2d52d2c5c1be2d50ba1f3c24a479b5d06 UID:49a0ffc0-df8e-43e8-958f-ecdefc3a9cdc NetNS:/var/run/netns/21ede880-9965-4167-88ac-6f28676c51df Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008aaa68}] Aliases:map[]}"
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.1541725Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.165964277Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ae4d46df528ce0cb95ed72b2bde504c2d52d2c5c1be2d50ba1f3c24a479b5d06 UID:49a0ffc0-df8e-43e8-958f-ecdefc3a9cdc NetNS:/var/run/netns/21ede880-9965-4167-88ac-6f28676c51df Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008aaa68}] Aliases:map[]}"
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.166152196Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.168452417Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.169396483Z" level=info msg="Ran pod sandbox ae4d46df528ce0cb95ed72b2bde504c2d52d2c5c1be2d50ba1f3c24a479b5d06 with infra container: default/busybox/POD" id=da203fbe-f7c5-4b84-81b7-afa67a9a5b0d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.170847643Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8a786538-4e1b-4d6f-9e5b-06e7d877d1ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.171018074Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8a786538-4e1b-4d6f-9e5b-06e7d877d1ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.171067261Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8a786538-4e1b-4d6f-9e5b-06e7d877d1ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.171978473Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=440db6d4-deac-47d9-9ab1-7a39703bed29 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:35 no-preload-936988 crio[767]: time="2025-12-17T08:32:35.173441592Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 08:32:36 no-preload-936988 crio[767]: time="2025-12-17T08:32:36.97619327Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=440db6d4-deac-47d9-9ab1-7a39703bed29 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:36 no-preload-936988 crio[767]: time="2025-12-17T08:32:36.976931689Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f7c8bb0-d0e3-4368-a261-5097e87a4de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:36 no-preload-936988 crio[767]: time="2025-12-17T08:32:36.978698458Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e368345d-180d-48c2-ac93-ac97263581d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:36 no-preload-936988 crio[767]: time="2025-12-17T08:32:36.982569547Z" level=info msg="Creating container: default/busybox/busybox" id=51710280-db43-41a7-a861-939b41e1e8ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:36 no-preload-936988 crio[767]: time="2025-12-17T08:32:36.982739362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:36 no-preload-936988 crio[767]: time="2025-12-17T08:32:36.986828663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:36 no-preload-936988 crio[767]: time="2025-12-17T08:32:36.987286109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:37 no-preload-936988 crio[767]: time="2025-12-17T08:32:37.033844525Z" level=info msg="Created container ce3ce6d50196dcbce46cbb8c40773a402142ec1dd916ec991bbc3f56c2b896ef: default/busybox/busybox" id=51710280-db43-41a7-a861-939b41e1e8ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:37 no-preload-936988 crio[767]: time="2025-12-17T08:32:37.034571517Z" level=info msg="Starting container: ce3ce6d50196dcbce46cbb8c40773a402142ec1dd916ec991bbc3f56c2b896ef" id=257e47bf-d925-4a8b-982b-7558052eb9d0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:37 no-preload-936988 crio[767]: time="2025-12-17T08:32:37.036432408Z" level=info msg="Started container" PID=2856 containerID=ce3ce6d50196dcbce46cbb8c40773a402142ec1dd916ec991bbc3f56c2b896ef description=default/busybox/busybox id=257e47bf-d925-4a8b-982b-7558052eb9d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae4d46df528ce0cb95ed72b2bde504c2d52d2c5c1be2d50ba1f3c24a479b5d06
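The CRI-O entries above show the runtime answering the standard CRI call sequence for the busybox pod: RunPodSandbox, ImageStatus, PullImage, CreateContainer, StartContainer. crictl drives the same RPCs from the command line; a small sketch exercising two of them, assuming the default crio socket path on the node:

	// cri_sketch.go - drive a couple of CRI RPCs via crictl against the crio socket.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func crictl(args ...string) {
		cmd := exec.Command("sudo", append([]string{"crictl", "--runtime-endpoint",
			"unix:///var/run/crio/crio.sock"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ crictl %v\n%s", args, out)
		if err != nil {
			fmt.Println("error:", err)
		}
	}

	func main() {
		crictl("pull", "gcr.io/k8s-minikube/busybox:1.28.4-glibc") // same image the kubelet pulled above
		crictl("ps", "--name", "busybox")                          // confirm the container is Running
	}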
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ce3ce6d50196d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   ae4d46df528ce       busybox                                     default
	0d429967ce056       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   d590d8ecde580       coredns-7d764666f9-ssxts                    kube-system
	ae26af67d7625       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   e7d2e9c7d2e9a       storage-provisioner                         kube-system
	cdc378360c691       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   cf873e72add13       kindnet-r9bn5                               kube-system
	e3c356d875829       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      27 seconds ago      Running             kube-proxy                0                   7706b19386ec0       kube-proxy-rrz8t                            kube-system
	689e3097c125b       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      37 seconds ago      Running             kube-apiserver            0                   38bdb16af6243       kube-apiserver-no-preload-936988            kube-system
	2f89487568128       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      37 seconds ago      Running             etcd                      0                   88f6bec519a5a       etcd-no-preload-936988                      kube-system
	853356b15e800       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      37 seconds ago      Running             kube-scheduler            0                   02f002cf338ba       kube-scheduler-no-preload-936988            kube-system
	5075e4f79a1de       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      37 seconds ago      Running             kube-controller-manager   0                   ccf6e8c80d2d7       kube-controller-manager-no-preload-936988   kube-system
	
	
	==> coredns [0d429967ce05685ae10e9e10849fbdda244af383957f5879acf8859c17df3cc0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58374 - 54927 "HINFO IN 4038507675256348791.2900064202932271669. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037048193s
	
	
	==> describe nodes <==
	Name:               no-preload-936988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-936988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=no-preload-936988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-936988
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:32:43 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:32:43 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:32:43 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:32:43 +0000   Wed, 17 Dec 2025 08:32:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-936988
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                84138bfd-5159-42b8-821c-3ae7ad0e9cb0
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-ssxts                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-936988                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-r9bn5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-936988             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-936988    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-rrz8t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-936988             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-936988 event: Registered Node no-preload-936988 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [2f89487568128eac50753ae9c2d960847ed0e353143743cf0fa9afd3f925cc4d] <==
	{"level":"info","ts":"2025-12-17T08:32:08.136785Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T08:32:08.625107Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T08:32:08.625203Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T08:32:08.625262Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-17T08:32:08.625275Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:32:08.625290Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:08.626023Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:08.626092Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:32:08.626112Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:08.626119Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:08.627282Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:32:08.627895Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-936988 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:32:08.627914Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:32:08.627935Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:32:08.628204Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:32:08.628285Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:32:08.628597Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:32:08.628717Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:32:08.628778Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:32:08.628841Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-17T08:32:08.628949Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-17T08:32:08.629380Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:32:08.629387Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:32:08.632718Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T08:32:08.632869Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 08:32:45 up  2:15,  0 user,  load average: 4.93, 3.81, 2.69
	Linux no-preload-936988 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cdc378360c691a8e061f5748d72fd0c36dd16a11e2355a2e5d03ed4f8023b3aa] <==
	I1217 08:32:20.733420       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:32:20.733700       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 08:32:20.733850       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:32:20.733869       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:32:20.733889       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:32:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:32:21.029320       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:32:21.029427       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:32:21.029472       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:32:21.029732       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:32:21.398229       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:32:21.398289       1 metrics.go:72] Registering metrics
	I1217 08:32:21.398372       1 controller.go:711] "Syncing nftables rules"
	I1217 08:32:31.029957       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:32:31.030043       1 main.go:301] handling current node
	I1217 08:32:41.029958       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:32:41.030000       1 main.go:301] handling current node
	
	
	==> kube-apiserver [689e3097c125b7f14ea233b1f6ebbcf4ad4f4de5502d705d1820b0c39f3b5efa] <==
	I1217 08:32:09.843101       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:32:09.843274       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 08:32:09.844750       1 controller.go:667] quota admission added evaluator for: namespaces
	E1217 08:32:09.845879       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1217 08:32:09.847507       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 08:32:09.881575       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:10.049155       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:32:10.748376       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 08:32:10.753992       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 08:32:10.754016       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:32:11.308461       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:32:11.356689       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:32:11.452030       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 08:32:11.459016       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 08:32:11.460155       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:32:11.464979       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:32:11.779976       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:32:12.577198       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:32:12.588627       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 08:32:12.598186       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:32:17.281563       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:17.287892       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:17.629221       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 08:32:17.734343       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1217 08:32:43.944506       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:60994: use of closed network connection
	
	
	==> kube-controller-manager [5075e4f79a1de3e29d3017623a682bec0c3f275c47aa36515837d1895bc9e6c7] <==
	I1217 08:32:16.584716       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.584755       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.584804       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.585898       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.586625       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.587463       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.587842       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.587864       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.587874       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.587897       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.588495       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.588508       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.588546       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.590574       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.589448       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.590730       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.591230       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.594584       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:32:16.596179       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.598326       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-936988" podCIDRs=["10.244.0.0/24"]
	I1217 08:32:16.690999       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:16.691051       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:32:16.691057       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:32:16.695092       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:31.586044       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [e3c356d87582904e9a0479be47be0f36cb246a9dfb58bb7e49126688e7546c3f] <==
	I1217 08:32:18.120712       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:32:18.196907       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:32:18.297000       1 shared_informer.go:377] "Caches are synced"
	I1217 08:32:18.297039       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 08:32:18.297168       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:32:18.317317       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:32:18.317381       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:32:18.322767       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:32:18.323666       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:32:18.323751       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:32:18.325846       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:32:18.325869       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:32:18.325883       1 config.go:309] "Starting node config controller"
	I1217 08:32:18.325894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:32:18.325972       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:32:18.325998       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:32:18.325866       1 config.go:200] "Starting service config controller"
	I1217 08:32:18.326051       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:32:18.426427       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:32:18.426458       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:32:18.426467       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:32:18.426466       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [853356b15e80095a0ae8db10bcab3fd3bc3c4305265341f1defd62d550b29243] <==
	E1217 08:32:09.805775       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 08:32:09.805811       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 08:32:09.805850       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1217 08:32:09.805887       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 08:32:09.805923       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 08:32:09.805963       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 08:32:09.805963       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 08:32:09.805900       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 08:32:09.806029       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1217 08:32:09.805789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 08:32:09.806155       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 08:32:09.806163       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 08:32:10.653398       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 08:32:10.696968       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 08:32:10.697156       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 08:32:10.747104       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 08:32:10.770048       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 08:32:10.794559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 08:32:10.805344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 08:32:10.866647       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 08:32:10.959287       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 08:32:11.025747       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 08:32:11.042011       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 08:32:11.112828       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1217 08:32:13.798816       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 08:32:17 no-preload-936988 kubelet[2187]: I1217 08:32:17.734591    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/255a4a0d-7a79-4ee3-93ad-921d40978251-cni-cfg\") pod \"kindnet-r9bn5\" (UID: \"255a4a0d-7a79-4ee3-93ad-921d40978251\") " pod="kube-system/kindnet-r9bn5"
	Dec 17 08:32:17 no-preload-936988 kubelet[2187]: I1217 08:32:17.734613    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/255a4a0d-7a79-4ee3-93ad-921d40978251-xtables-lock\") pod \"kindnet-r9bn5\" (UID: \"255a4a0d-7a79-4ee3-93ad-921d40978251\") " pod="kube-system/kindnet-r9bn5"
	Dec 17 08:32:17 no-preload-936988 kubelet[2187]: I1217 08:32:17.734636    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b40fd988-a562-4c15-96e2-da3ecd348a8f-lib-modules\") pod \"kube-proxy-rrz8t\" (UID: \"b40fd988-a562-4c15-96e2-da3ecd348a8f\") " pod="kube-system/kube-proxy-rrz8t"
	Dec 17 08:32:17 no-preload-936988 kubelet[2187]: I1217 08:32:17.734659    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/255a4a0d-7a79-4ee3-93ad-921d40978251-lib-modules\") pod \"kindnet-r9bn5\" (UID: \"255a4a0d-7a79-4ee3-93ad-921d40978251\") " pod="kube-system/kindnet-r9bn5"
	Dec 17 08:32:17 no-preload-936988 kubelet[2187]: I1217 08:32:17.734680    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b40fd988-a562-4c15-96e2-da3ecd348a8f-kube-proxy\") pod \"kube-proxy-rrz8t\" (UID: \"b40fd988-a562-4c15-96e2-da3ecd348a8f\") " pod="kube-system/kube-proxy-rrz8t"
	Dec 17 08:32:18 no-preload-936988 kubelet[2187]: E1217 08:32:18.552628    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-936988" containerName="kube-scheduler"
	Dec 17 08:32:18 no-preload-936988 kubelet[2187]: I1217 08:32:18.565795    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-rrz8t" podStartSLOduration=1.565777326 podStartE2EDuration="1.565777326s" podCreationTimestamp="2025-12-17 08:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:18.462816039 +0000 UTC m=+6.137721493" watchObservedRunningTime="2025-12-17 08:32:18.565777326 +0000 UTC m=+6.240682778"
	Dec 17 08:32:19 no-preload-936988 kubelet[2187]: E1217 08:32:19.745919    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-936988" containerName="kube-apiserver"
	Dec 17 08:32:21 no-preload-936988 kubelet[2187]: I1217 08:32:21.480569    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-r9bn5" podStartSLOduration=1.9476932580000001 podStartE2EDuration="4.480508437s" podCreationTimestamp="2025-12-17 08:32:17 +0000 UTC" firstStartedPulling="2025-12-17 08:32:17.978918811 +0000 UTC m=+5.653824207" lastFinishedPulling="2025-12-17 08:32:20.511734004 +0000 UTC m=+8.186639386" observedRunningTime="2025-12-17 08:32:21.480133643 +0000 UTC m=+9.155039061" watchObservedRunningTime="2025-12-17 08:32:21.480508437 +0000 UTC m=+9.155413838"
	Dec 17 08:32:25 no-preload-936988 kubelet[2187]: E1217 08:32:25.325454    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-936988" containerName="etcd"
	Dec 17 08:32:25 no-preload-936988 kubelet[2187]: E1217 08:32:25.368097    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-936988" containerName="kube-controller-manager"
	Dec 17 08:32:28 no-preload-936988 kubelet[2187]: E1217 08:32:28.557230    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-936988" containerName="kube-scheduler"
	Dec 17 08:32:29 no-preload-936988 kubelet[2187]: E1217 08:32:29.751767    2187 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-936988" containerName="kube-apiserver"
	Dec 17 08:32:31 no-preload-936988 kubelet[2187]: I1217 08:32:31.299432    2187 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 17 08:32:31 no-preload-936988 kubelet[2187]: I1217 08:32:31.432921    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74tl\" (UniqueName: \"kubernetes.io/projected/db9873f8-e8db-4baa-8894-8deb3f48e4d7-kube-api-access-l74tl\") pod \"coredns-7d764666f9-ssxts\" (UID: \"db9873f8-e8db-4baa-8894-8deb3f48e4d7\") " pod="kube-system/coredns-7d764666f9-ssxts"
	Dec 17 08:32:31 no-preload-936988 kubelet[2187]: I1217 08:32:31.432967    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9765b268-a3ba-4b1f-ac24-d1ad7e741f2f-tmp\") pod \"storage-provisioner\" (UID: \"9765b268-a3ba-4b1f-ac24-d1ad7e741f2f\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:31 no-preload-936988 kubelet[2187]: I1217 08:32:31.432986    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm64q\" (UniqueName: \"kubernetes.io/projected/9765b268-a3ba-4b1f-ac24-d1ad7e741f2f-kube-api-access-dm64q\") pod \"storage-provisioner\" (UID: \"9765b268-a3ba-4b1f-ac24-d1ad7e741f2f\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:31 no-preload-936988 kubelet[2187]: I1217 08:32:31.433007    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db9873f8-e8db-4baa-8894-8deb3f48e4d7-config-volume\") pod \"coredns-7d764666f9-ssxts\" (UID: \"db9873f8-e8db-4baa-8894-8deb3f48e4d7\") " pod="kube-system/coredns-7d764666f9-ssxts"
	Dec 17 08:32:32 no-preload-936988 kubelet[2187]: E1217 08:32:32.491820    2187 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ssxts" containerName="coredns"
	Dec 17 08:32:32 no-preload-936988 kubelet[2187]: I1217 08:32:32.504717    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.504693032 podStartE2EDuration="14.504693032s" podCreationTimestamp="2025-12-17 08:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:32.503315597 +0000 UTC m=+20.178220997" watchObservedRunningTime="2025-12-17 08:32:32.504693032 +0000 UTC m=+20.179598431"
	Dec 17 08:32:32 no-preload-936988 kubelet[2187]: I1217 08:32:32.520673    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ssxts" podStartSLOduration=15.520656130999999 podStartE2EDuration="15.520656131s" podCreationTimestamp="2025-12-17 08:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:32.519960281 +0000 UTC m=+20.194865681" watchObservedRunningTime="2025-12-17 08:32:32.520656131 +0000 UTC m=+20.195561530"
	Dec 17 08:32:33 no-preload-936988 kubelet[2187]: E1217 08:32:33.495627    2187 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ssxts" containerName="coredns"
	Dec 17 08:32:34 no-preload-936988 kubelet[2187]: E1217 08:32:34.498352    2187 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ssxts" containerName="coredns"
	Dec 17 08:32:34 no-preload-936988 kubelet[2187]: I1217 08:32:34.957338    2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zsh\" (UniqueName: \"kubernetes.io/projected/49a0ffc0-df8e-43e8-958f-ecdefc3a9cdc-kube-api-access-x8zsh\") pod \"busybox\" (UID: \"49a0ffc0-df8e-43e8-958f-ecdefc3a9cdc\") " pod="default/busybox"
	Dec 17 08:32:37 no-preload-936988 kubelet[2187]: I1217 08:32:37.518030    2187 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.71159248 podStartE2EDuration="3.518009586s" podCreationTimestamp="2025-12-17 08:32:34 +0000 UTC" firstStartedPulling="2025-12-17 08:32:35.171521381 +0000 UTC m=+22.846426759" lastFinishedPulling="2025-12-17 08:32:36.977938474 +0000 UTC m=+24.652843865" observedRunningTime="2025-12-17 08:32:37.517928341 +0000 UTC m=+25.192833743" watchObservedRunningTime="2025-12-17 08:32:37.518009586 +0000 UTC m=+25.192914985"
	
	
	==> storage-provisioner [ae26af67d762508ef9d4c7e54a959866b345762fd7c6fd7a3a11a826cc9b6166] <==
	I1217 08:32:31.763104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:32:31.779367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:32:31.779443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:32:31.784444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:31.792072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:32:31.792277       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:32:31.792961       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"548a8188-8abf-4425-8621-70755d3b9167", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-936988_7a285ebb-d913-49ea-a32f-58c0d9e331c0 became leader
	I1217 08:32:31.793009       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-936988_7a285ebb-d913-49ea-a32f-58c0d9e331c0!
	W1217 08:32:31.806507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:31.825442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:32:31.893332       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-936988_7a285ebb-d913-49ea-a32f-58c0d9e331c0!
	W1217 08:32:33.830411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:33.837815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:35.841157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:35.845874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:37.848499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:37.854574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:39.857732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:39.861824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:41.865055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:41.871366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:43.874906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:43.879497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-936988 -n no-preload-936988
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-936988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.28s)
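
Note on the failure mode: this EnableAddonWhileActive failure and the similar one for default-k8s-diff-port below surface the same error chain in their stderr ("check paused: list paused: runc: sudo runc list -f json ... open /run/runc: no such file or directory"): before enabling an addon, minikube checks whether the cluster is paused by listing runc containers over SSH, and that listing itself fails on these crio profiles because /run/runc is absent, so the command exits with MK_ADDON_ENABLE_PAUSED (exit status 11) before any addon work happens. The Go sketch below is a hypothetical, minimal reproduction of that chain only; the function name listPaused and the error wrapping are illustrative and are not minikube's actual source.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listPaused mimics the "list paused: runc: sudo runc list -f json" step
    // seen in the stderr above. On a node where /run/runc does not exist,
    // runc exits non-zero and the whole paused-check fails.
    func listPaused() error {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("runc: sudo runc list -f json: %w (output: %s)", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := listPaused(); err != nil {
    		// Corresponds to "Exiting due to MK_ADDON_ENABLE_PAUSED" / exit status 11.
    		fmt.Println("check paused: list paused:", err)
    	}
    }

Run on one of the failing crio nodes, this prints the same "no such file or directory" error captured in the report; on a node where runc is the active runtime it returns nil and the addon-enable path would proceed.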

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (294.055357ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-225657 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-225657 describe deploy/metrics-server -n kube-system: exit status 1 (73.058193ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-225657 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-225657
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-225657:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57",
	        "Created": "2025-12-17T08:32:08.014706364Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 878540,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:32:08.080070803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/hostname",
	        "HostsPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/hosts",
	        "LogPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57-json.log",
	        "Name": "/default-k8s-diff-port-225657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-225657:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-225657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57",
	                "LowerDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-225657",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-225657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-225657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-225657",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-225657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bd5b75774b425f17ab02a74a481abf8c280e38f0f4779e890b3b25a373184313",
	            "SandboxKey": "/var/run/docker/netns/bd5b75774b42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-225657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "370bb36dbd55007644b4cd15d494d08d5f62e1e604dbe8d80d1e7f9877cb1b79",
	                    "EndpointID": "c8678f92f8d82727decc55679c297ba2b21d5af466c364480dd4c62eb05efb63",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7e:16:b6:f7:45:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-225657",
	                        "79798ebda184"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
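
The inspect dump above records the kic container's published ports (22/tcp on 127.0.0.1:33510, 8444/tcp on 33513, and so on). As a minimal, self-contained sketch, the same mapping can be read with the `docker container inspect` Go template that the provisioning log further down uses; the container name here is simply the profile from this report, and the snippet is illustrative rather than part of the test harness:

// Sketch: read the host port published for 22/tcp from the inspect data above,
// using the same Go template the provisioning log runs later in this report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Profile/container name taken from this report; adjust for other profiles.
	name := "default-k8s-diff-port-225657"
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container dumped above this prints 33510.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}

Against the container dumped above this prints 33510, the port the harness then dials for SSH.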
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
E1217 08:32:56.607020  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:32:56.613497  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:32:56.624929  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:32:56.646444  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:32:56.688319  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:32:56.770185  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:32:56.931610  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225657 logs -n 25
E1217 08:32:57.253913  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:32:57.895523  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-225657 logs -n 25: (1.362560677s)
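
The repeated cert_rotation errors above come from client-go attempting to reload a client certificate for the auto-055130 profile, which has already been deleted, so the file no longer exists. A minimal sketch of checking for a stale certificate path before handing it to a client builder (a hypothetical guard, not minikube's actual handling):

// Sketch: skip profile entries whose client certificate is gone, so a cert
// reload does not keep logging the failure seen above. Hypothetical helper.
package main

import (
	"errors"
	"fmt"
	"os"
)

func certExists(path string) bool {
	_, err := os.Stat(path)
	return !errors.Is(err, os.ErrNotExist)
}

func main() {
	// Path copied from the errors above; the auto-055130 profile was already deleted.
	crt := "/home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt"
	if !certExists(crt) {
		fmt.Println("skipping stale profile cert:", crt)
		return
	}
	fmt.Println("cert present, safe to build client config from:", crt)
}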
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-055130 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │                     │
	│ ssh     │ -p bridge-055130 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo containerd config dump                                                                                                                                                                                                  │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                                                                                               │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p no-preload-936988 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:32:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
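
The header above describes the klog-style layout of every entry that follows: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal sketch that emits lines in the same layout with k8s.io/klog/v2, purely to illustrate how to read the entries (the messages are made up):

// Sketch: produce [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg lines.
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)          // registers -v, -logtostderr, etc. on flag.CommandLine
	flag.Set("logtostderr", "true")
	flag.Parse()
	defer klog.Flush()

	// Emits e.g. "I1217 08:32:51.219370  886345 main.go:17] Setting OutFile to fd 1 ..."
	klog.Infof("Setting OutFile to fd %d ...", 1)
	// Emits a "W1217 ..." line in the same layout.
	klog.Warningf("unexpected machine state, will restart")
}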
	I1217 08:32:51.219370  886345 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:32:51.219510  886345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:51.219518  886345 out.go:374] Setting ErrFile to fd 2...
	I1217 08:32:51.219523  886345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:32:51.219756  886345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:32:51.220232  886345 out.go:368] Setting JSON to false
	I1217 08:32:51.221336  886345 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8116,"bootTime":1765952255,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:32:51.221406  886345 start.go:143] virtualization: kvm guest
	I1217 08:32:51.223378  886345 out.go:179] * [embed-certs-581631] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:32:51.224968  886345 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:32:51.224966  886345 notify.go:221] Checking for updates...
	I1217 08:32:51.227782  886345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:32:51.229250  886345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:32:51.230886  886345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:32:51.232579  886345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:32:51.233988  886345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:32:51.235663  886345 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:51.236257  886345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:32:51.260987  886345 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:32:51.261147  886345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:51.318805  886345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 08:32:51.308119668 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:51.318913  886345 docker.go:319] overlay module found
	I1217 08:32:51.320842  886345 out.go:179] * Using the docker driver based on existing profile
	I1217 08:32:51.322072  886345 start.go:309] selected driver: docker
	I1217 08:32:51.322093  886345 start.go:927] validating driver "docker" against &{Name:embed-certs-581631 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-581631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:51.322184  886345 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:32:51.322840  886345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:32:51.378859  886345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 08:32:51.368295264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:32:51.379224  886345 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:32:51.379262  886345 cni.go:84] Creating CNI manager for ""
	I1217 08:32:51.379331  886345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:32:51.379374  886345 start.go:353] cluster config:
	{Name:embed-certs-581631 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-581631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:32:51.381455  886345 out.go:179] * Starting "embed-certs-581631" primary control-plane node in "embed-certs-581631" cluster
	I1217 08:32:51.383093  886345 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:32:51.384590  886345 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:32:51.385859  886345 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:32:51.385898  886345 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:32:51.385909  886345 cache.go:65] Caching tarball of preloaded images
	I1217 08:32:51.385992  886345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:32:51.386054  886345 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:32:51.386073  886345 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:32:51.386195  886345 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/embed-certs-581631/config.json ...
	I1217 08:32:51.408209  886345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:32:51.408234  886345 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:32:51.408250  886345 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:32:51.408291  886345 start.go:360] acquireMachinesLock for embed-certs-581631: {Name:mk8aa81339897bc4f23ef5e51f0cb6b693010ab2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:32:51.408350  886345 start.go:364] duration metric: took 40.534µs to acquireMachinesLock for "embed-certs-581631"
	I1217 08:32:51.408367  886345 start.go:96] Skipping create...Using existing machine configuration
	I1217 08:32:51.408374  886345 fix.go:54] fixHost starting: 
	I1217 08:32:51.408618  886345 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:51.426266  886345 fix.go:112] recreateIfNeeded on embed-certs-581631: state=Stopped err=<nil>
	W1217 08:32:51.426299  886345 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 08:32:50.019638  885608 out.go:252] * Restarting existing docker container for "old-k8s-version-640910" ...
	I1217 08:32:50.019763  885608 cli_runner.go:164] Run: docker start old-k8s-version-640910
	I1217 08:32:50.294354  885608 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:32:50.316473  885608 kic.go:432] container "old-k8s-version-640910" state is running.
	I1217 08:32:50.316962  885608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-640910
	I1217 08:32:50.339037  885608 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/old-k8s-version-640910/config.json ...
	I1217 08:32:50.339318  885608 machine.go:94] provisionDockerMachine start ...
	I1217 08:32:50.339416  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:50.359882  885608 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:50.360043  885608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1217 08:32:50.360060  885608 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:32:50.360736  885608 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45484->127.0.0.1:33515: read: connection reset by peer
	I1217 08:32:53.497675  885608 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-640910
	
	I1217 08:32:53.497720  885608 ubuntu.go:182] provisioning hostname "old-k8s-version-640910"
	I1217 08:32:53.497813  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:53.517848  885608 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:53.517981  885608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1217 08:32:53.517996  885608 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-640910 && echo "old-k8s-version-640910" | sudo tee /etc/hostname
	I1217 08:32:53.659981  885608 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-640910
	
	I1217 08:32:53.660062  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:53.680488  885608 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:53.680649  885608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1217 08:32:53.680672  885608 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-640910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-640910/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-640910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:32:53.812743  885608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:32:53.812780  885608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:32:53.812813  885608 ubuntu.go:190] setting up certificates
	I1217 08:32:53.812826  885608 provision.go:84] configureAuth start
	I1217 08:32:53.812892  885608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-640910
	I1217 08:32:53.831873  885608 provision.go:143] copyHostCerts
	I1217 08:32:53.831957  885608 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:32:53.831979  885608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:32:53.832073  885608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:32:53.832210  885608 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:32:53.832223  885608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:32:53.832264  885608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:32:53.832355  885608 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:32:53.832366  885608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:32:53.832402  885608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:32:53.832481  885608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-640910 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-640910]
	I1217 08:32:54.027706  885608 provision.go:177] copyRemoteCerts
	I1217 08:32:54.027770  885608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:32:54.027810  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:54.046887  885608 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:54.141239  885608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:32:54.159884  885608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 08:32:54.178278  885608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:32:54.196911  885608 provision.go:87] duration metric: took 384.070518ms to configureAuth
	I1217 08:32:54.196942  885608 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:32:54.197126  885608 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:32:54.197239  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:54.216035  885608 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:54.216139  885608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1217 08:32:54.216154  885608 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:32:54.525142  885608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:32:54.525171  885608 machine.go:97] duration metric: took 4.185832458s to provisionDockerMachine
	I1217 08:32:54.525186  885608 start.go:293] postStartSetup for "old-k8s-version-640910" (driver="docker")
	I1217 08:32:54.525200  885608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:32:54.525273  885608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:32:54.525336  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:54.545423  885608 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:54.640497  885608 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:32:54.644600  885608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:32:54.644634  885608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:32:54.644650  885608 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:32:54.644732  885608 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:32:54.644834  885608 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:32:54.644960  885608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:32:54.652953  885608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:54.672065  885608 start.go:296] duration metric: took 146.862253ms for postStartSetup
	I1217 08:32:54.672161  885608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:32:54.672211  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:54.690644  885608 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:54.781878  885608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:32:54.786901  885608 fix.go:56] duration metric: took 4.788473986s for fixHost
	I1217 08:32:54.786932  885608 start.go:83] releasing machines lock for "old-k8s-version-640910", held for 4.788545868s
	I1217 08:32:54.787005  885608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-640910
	I1217 08:32:54.806663  885608 ssh_runner.go:195] Run: cat /version.json
	I1217 08:32:54.806742  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:54.806744  885608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:32:54.806922  885608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:32:54.828062  885608 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:54.828366  885608 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:32:54.977335  885608 ssh_runner.go:195] Run: systemctl --version
	I1217 08:32:54.984640  885608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:32:55.022337  885608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:32:55.027359  885608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:32:55.027425  885608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:32:55.036277  885608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:32:55.036310  885608 start.go:496] detecting cgroup driver to use...
	I1217 08:32:55.036347  885608 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:32:55.036390  885608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:32:55.052885  885608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:32:55.066589  885608 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:32:55.066654  885608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:32:55.083768  885608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:32:55.097863  885608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:32:55.185883  885608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:32:55.271042  885608 docker.go:234] disabling docker service ...
	I1217 08:32:55.271117  885608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:32:55.285950  885608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:32:55.299159  885608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:32:55.380320  885608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:32:55.466434  885608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:32:55.479685  885608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:32:55.497056  885608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 08:32:55.497128  885608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:55.507899  885608 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:32:55.507975  885608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:55.518500  885608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:55.528427  885608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:55.537957  885608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:32:55.547758  885608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:55.557844  885608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:55.567453  885608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:32:55.577582  885608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:32:55.585949  885608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:32:55.594259  885608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:32:55.674255  885608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 08:32:55.840237  885608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:32:55.840319  885608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:32:55.845809  885608 start.go:564] Will wait 60s for crictl version
	I1217 08:32:55.845876  885608 ssh_runner.go:195] Run: which crictl
	I1217 08:32:55.851221  885608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:32:55.879433  885608 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:32:55.879529  885608 ssh_runner.go:195] Run: crio --version
	I1217 08:32:55.910560  885608 ssh_runner.go:195] Run: crio --version
	I1217 08:32:55.942690  885608 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
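
The restart sequence above waits up to 60s for /var/run/crio/crio.sock and then asks crictl for the runtime version. A rough sketch of that poll-then-verify pattern, with the socket path, timeout, and crictl location taken from the log (this is an illustration, not the harness code):

// Sketch: wait for the CRI-O socket to appear, then confirm crictl can talk to it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the socket path exists or the timeout expires,
// mirroring the "Will wait 60s for socket path" step in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// crictl path as reported by `which crictl` in the log; may differ elsewhere.
	out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Print(string(out))
}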
	I1217 08:32:51.428171  886345 out.go:252] * Restarting existing docker container for "embed-certs-581631" ...
	I1217 08:32:51.428259  886345 cli_runner.go:164] Run: docker start embed-certs-581631
	I1217 08:32:51.708626  886345 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:32:51.729928  886345 kic.go:432] container "embed-certs-581631" state is running.
	I1217 08:32:51.730307  886345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-581631
	I1217 08:32:51.751803  886345 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/embed-certs-581631/config.json ...
	I1217 08:32:51.752088  886345 machine.go:94] provisionDockerMachine start ...
	I1217 08:32:51.752184  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:51.772796  886345 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:51.772945  886345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1217 08:32:51.772959  886345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:32:51.773707  886345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56980->127.0.0.1:33520: read: connection reset by peer
	I1217 08:32:54.908055  886345 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-581631
	
	I1217 08:32:54.908086  886345 ubuntu.go:182] provisioning hostname "embed-certs-581631"
	I1217 08:32:54.908172  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:54.928096  886345 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:54.928203  886345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1217 08:32:54.928216  886345 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-581631 && echo "embed-certs-581631" | sudo tee /etc/hostname
	I1217 08:32:55.068511  886345 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-581631
	
	I1217 08:32:55.068626  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:55.088380  886345 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:55.088554  886345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1217 08:32:55.088578  886345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-581631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-581631/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-581631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:32:55.225720  886345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:32:55.225761  886345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:32:55.225793  886345 ubuntu.go:190] setting up certificates
	I1217 08:32:55.225813  886345 provision.go:84] configureAuth start
	I1217 08:32:55.225909  886345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-581631
	I1217 08:32:55.244913  886345 provision.go:143] copyHostCerts
	I1217 08:32:55.244992  886345 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:32:55.245015  886345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:32:55.245085  886345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:32:55.245203  886345 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:32:55.245212  886345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:32:55.245240  886345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:32:55.245311  886345 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:32:55.245318  886345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:32:55.245343  886345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:32:55.245416  886345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.embed-certs-581631 san=[127.0.0.1 192.168.76.2 embed-certs-581631 localhost minikube]
	I1217 08:32:55.297636  886345 provision.go:177] copyRemoteCerts
	I1217 08:32:55.297697  886345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:32:55.297736  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:55.317291  886345 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:55.422419  886345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:32:55.441608  886345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 08:32:55.460788  886345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:32:55.480237  886345 provision.go:87] duration metric: took 254.4069ms to configureAuth
	I1217 08:32:55.480267  886345 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:32:55.480441  886345 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:32:55.480527  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:55.501060  886345 main.go:143] libmachine: Using SSH client type: native
	I1217 08:32:55.501182  886345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1217 08:32:55.501217  886345 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:32:55.845514  886345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:32:55.845557  886345 machine.go:97] duration metric: took 4.093448182s to provisionDockerMachine
	I1217 08:32:55.845572  886345 start.go:293] postStartSetup for "embed-certs-581631" (driver="docker")
	I1217 08:32:55.845586  886345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:32:55.845681  886345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:32:55.845737  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:55.868319  886345 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:55.968382  886345 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:32:55.972286  886345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:32:55.972321  886345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:32:55.972334  886345 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:32:55.972391  886345 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:32:55.972632  886345 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:32:55.972771  886345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:32:55.981095  886345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:32:56.000669  886345 start.go:296] duration metric: took 155.076164ms for postStartSetup
	I1217 08:32:56.000757  886345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:32:56.000809  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:56.022187  886345 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:56.117807  886345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:32:56.124207  886345 fix.go:56] duration metric: took 4.715824004s for fixHost
	I1217 08:32:56.124243  886345 start.go:83] releasing machines lock for "embed-certs-581631", held for 4.7158819s
	I1217 08:32:56.124327  886345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-581631
	I1217 08:32:56.147523  886345 ssh_runner.go:195] Run: cat /version.json
	I1217 08:32:56.147592  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:56.147597  886345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:32:56.147674  886345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:32:56.171003  886345 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:32:56.171209  886345 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	
	
	==> CRI-O <==
	Dec 17 08:32:45 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:45.689237025Z" level=info msg="Starting container: 82c1612f24cd676f4a9367318bbe071a0a025de336327d908071878d79eb706e" id=bed46ccd-eb68-4a62-8d65-16a5b4847a5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:45 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:45.69134628Z" level=info msg="Started container" PID=1919 containerID=82c1612f24cd676f4a9367318bbe071a0a025de336327d908071878d79eb706e description=kube-system/coredns-66bc5c9577-4n72s/coredns id=bed46ccd-eb68-4a62-8d65-16a5b4847a5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4bfaf4d971bfea2678965d4a47fa79f24d5716cf30433a7af1dc4d57aa3f51e
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.345864245Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7cb475c8-58f8-4752-821b-081c9836a6d6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.345961606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.351515567Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c4354bafaff93efebb01cf29898c3731c9afdc62ec0e8546acc16469c2b7d759 UID:0a4eb68c-1efc-41c5-8d95-7bb4c25b10bc NetNS:/var/run/netns/302051da-5bf3-4931-aa93-f90e28cafba9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a440}] Aliases:map[]}"
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.351568681Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.361622216Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c4354bafaff93efebb01cf29898c3731c9afdc62ec0e8546acc16469c2b7d759 UID:0a4eb68c-1efc-41c5-8d95-7bb4c25b10bc NetNS:/var/run/netns/302051da-5bf3-4931-aa93-f90e28cafba9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a440}] Aliases:map[]}"
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.36177215Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.362514421Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.363318147Z" level=info msg="Ran pod sandbox c4354bafaff93efebb01cf29898c3731c9afdc62ec0e8546acc16469c2b7d759 with infra container: default/busybox/POD" id=7cb475c8-58f8-4752-821b-081c9836a6d6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.364737699Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=12168423-9db6-40c7-b76a-69a1ae69222e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.364855944Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=12168423-9db6-40c7-b76a-69a1ae69222e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.364891109Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=12168423-9db6-40c7-b76a-69a1ae69222e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.365499794Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2c08c47c-82af-4e56-bc4c-0fd35b9d0686 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:48 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:48.366963938Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.282488219Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2c08c47c-82af-4e56-bc4c-0fd35b9d0686 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.283273258Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e865409d-4bfc-437b-94ff-b64f876998c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.284810861Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ce47761-c081-46b9-a082-8bdc59e67909 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.289267612Z" level=info msg="Creating container: default/busybox/busybox" id=6d63ef02-73e3-4705-9a96-1d0eadbb0162 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.28943846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.295065778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.295842819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.332449069Z" level=info msg="Created container 329b29fde74896c34d45de2068b3d9b5b04387fbf15625381d5a28c697f1f4a3: default/busybox/busybox" id=6d63ef02-73e3-4705-9a96-1d0eadbb0162 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.333252337Z" level=info msg="Starting container: 329b29fde74896c34d45de2068b3d9b5b04387fbf15625381d5a28c697f1f4a3" id=b6424cc3-10ec-4cd0-b1f0-ba7c5fe42bbd name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:32:50 default-k8s-diff-port-225657 crio[773]: time="2025-12-17T08:32:50.335756009Z" level=info msg="Started container" PID=1995 containerID=329b29fde74896c34d45de2068b3d9b5b04387fbf15625381d5a28c697f1f4a3 description=default/busybox/busybox id=b6424cc3-10ec-4cd0-b1f0-ba7c5fe42bbd name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4354bafaff93efebb01cf29898c3731c9afdc62ec0e8546acc16469c2b7d759
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	329b29fde7489       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   c4354bafaff93       busybox                                                default
	82c1612f24cd6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   c4bfaf4d971bf       coredns-66bc5c9577-4n72s                               kube-system
	ce049e5558994       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   691619cbd0cd4       storage-provisioner                                    kube-system
	6622f7749b43a       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   04633d71bc582       kindnet-s5z6t                                          kube-system
	d464787eb0407       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      25 seconds ago      Running             kube-proxy                0                   bf4b6c048b736       kube-proxy-7lhc6                                       kube-system
	780ed8b69eb58       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      36 seconds ago      Running             kube-apiserver            0                   d85b2e16ca8fa       kube-apiserver-default-k8s-diff-port-225657            kube-system
	a5a7a0b609594       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      36 seconds ago      Running             kube-scheduler            0                   9697c06b39b22       kube-scheduler-default-k8s-diff-port-225657            kube-system
	dfe78e2eb3e2e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   923b12e94d356       etcd-default-k8s-diff-port-225657                      kube-system
	cc333876cc66d       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      36 seconds ago      Running             kube-controller-manager   0                   7ee81025a5814       kube-controller-manager-default-k8s-diff-port-225657   kube-system
	
	
	==> coredns [82c1612f24cd676f4a9367318bbe071a0a025de336327d908071878d79eb706e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34314 - 56090 "HINFO IN 3351692381126339746.6872080057438842805. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04250377s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-225657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-225657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=default-k8s-diff-port-225657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-225657
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:32:56 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:32:56 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:32:56 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:32:56 +0000   Wed, 17 Dec 2025 08:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-225657
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                05461791-e89b-4d46-9592-b5168df83171
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-4n72s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-225657                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-s5z6t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-225657             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-225657    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-7lhc6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-225657             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-225657 event: Registered Node default-k8s-diff-port-225657 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-225657 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [dfe78e2eb3e2ed9a53e71c32d73a78ef3c7d369c07d630b5a7a549bca098e540] <==
	{"level":"warn","ts":"2025-12-17T08:32:22.815236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.823253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.832502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.839501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.846523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.854144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.861722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.877854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.884764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.897753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.905076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.912344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.920078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.927355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.934480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.942363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.949341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.956339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.963878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.972606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.979847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:22.998069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:23.005569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:23.012782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:32:23.060746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53652","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:32:58 up  2:15,  0 user,  load average: 4.40, 3.74, 2.68
	Linux default-k8s-diff-port-225657 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6622f7749b43aae8e156f5d5ef0f14897c1a37665c38de7349601031bb6cdc81] <==
	I1217 08:32:34.667967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:32:34.668340       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 08:32:34.668554       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:32:34.668585       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:32:34.668617       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:32:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:32:34.968102       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:32:34.968338       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:32:34.968414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:32:35.066209       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:32:35.365480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:32:35.365527       1 metrics.go:72] Registering metrics
	I1217 08:32:35.365683       1 controller.go:711] "Syncing nftables rules"
	I1217 08:32:44.969656       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:32:44.969746       1 main.go:301] handling current node
	I1217 08:32:54.967998       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:32:54.968047       1 main.go:301] handling current node
	
	
	==> kube-apiserver [780ed8b69eb586cab3693c2ef239117de0b57060cdc79b3595a1f49569233b10] <==
	I1217 08:32:23.639901       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:32:23.643669       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:23.643719       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 08:32:23.651934       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:23.652101       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:32:23.829045       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:32:24.443716       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 08:32:24.448047       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 08:32:24.448065       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:32:25.047844       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:32:25.088926       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:32:25.154747       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 08:32:25.163216       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1217 08:32:25.164670       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:32:25.169802       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:32:25.458477       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:32:25.957551       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:32:25.969973       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 08:32:25.978874       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:32:31.365560       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:31.371825       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:32:31.511481       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 08:32:31.511481       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 08:32:31.569410       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1217 08:32:56.135814       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:60084: use of closed network connection
	
	
	==> kube-controller-manager [cc333876cc66dde37f9ac3a60deac51827eff7c7034581b5cd8693c5478b77fb] <==
	I1217 08:32:30.457772       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 08:32:30.457863       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-225657"
	I1217 08:32:30.457864       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 08:32:30.457960       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 08:32:30.457876       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 08:32:30.458067       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:32:30.458097       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 08:32:30.457876       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 08:32:30.458273       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:32:30.458580       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:32:30.458648       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 08:32:30.458663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 08:32:30.458714       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 08:32:30.458920       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 08:32:30.458988       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:32:30.459006       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 08:32:30.459005       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 08:32:30.462074       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:32:30.463178       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 08:32:30.468527       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:32:30.471945       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 08:32:30.477365       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:32:30.481653       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:32:30.490055       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:32:45.459642       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d464787eb0407b46c1de0c353627876d6f991bcec6b7b0a14c0fa0083fbb2ff5] <==
	I1217 08:32:32.029122       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:32:32.104989       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:32:32.205258       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:32:32.205305       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 08:32:32.205463       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:32:32.228078       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:32:32.228150       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:32:32.234274       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:32:32.234671       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:32:32.234752       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:32:32.236519       1 config.go:200] "Starting service config controller"
	I1217 08:32:32.236591       1 config.go:309] "Starting node config controller"
	I1217 08:32:32.236601       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:32:32.236606       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:32:32.236623       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:32:32.236628       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:32:32.236646       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:32:32.236655       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:32:32.337234       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:32:32.337264       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:32:32.337246       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:32:32.337350       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [a5a7a0b6095943be47ba403ecfc93e5cad930263a1627226c00aa06abc489ef0] <==
	E1217 08:32:23.494516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 08:32:23.494649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:32:23.494733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:32:23.494902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 08:32:23.494924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 08:32:23.494934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 08:32:23.495480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:32:23.495510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 08:32:23.495521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:32:23.495963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:32:23.496001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 08:32:24.332291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 08:32:24.365305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 08:32:24.437018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 08:32:24.453231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:32:24.462599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 08:32:24.482153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:32:24.489605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 08:32:24.564142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 08:32:24.623552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 08:32:24.666740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:32:24.704429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 08:32:24.711869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:32:24.832465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1217 08:32:27.291320       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:32:26 default-k8s-diff-port-225657 kubelet[1334]: E1217 08:32:26.839275    1334 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-225657\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-225657"
	Dec 17 08:32:26 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:26.853718    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-225657" podStartSLOduration=1.853696681 podStartE2EDuration="1.853696681s" podCreationTimestamp="2025-12-17 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:26.853671115 +0000 UTC m=+1.142081257" watchObservedRunningTime="2025-12-17 08:32:26.853696681 +0000 UTC m=+1.142106828"
	Dec 17 08:32:26 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:26.864687    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-225657" podStartSLOduration=1.864661249 podStartE2EDuration="1.864661249s" podCreationTimestamp="2025-12-17 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:26.864650403 +0000 UTC m=+1.153060545" watchObservedRunningTime="2025-12-17 08:32:26.864661249 +0000 UTC m=+1.153071385"
	Dec 17 08:32:26 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:26.875930    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-225657" podStartSLOduration=1.875911291 podStartE2EDuration="1.875911291s" podCreationTimestamp="2025-12-17 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:26.875724016 +0000 UTC m=+1.164134158" watchObservedRunningTime="2025-12-17 08:32:26.875911291 +0000 UTC m=+1.164321428"
	Dec 17 08:32:26 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:26.903972    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-225657" podStartSLOduration=1.9039455589999998 podStartE2EDuration="1.903945559s" podCreationTimestamp="2025-12-17 08:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:26.890423553 +0000 UTC m=+1.178833695" watchObservedRunningTime="2025-12-17 08:32:26.903945559 +0000 UTC m=+1.192355702"
	Dec 17 08:32:30 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:30.439679    1334 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 08:32:30 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:30.440433    1334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618701    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a163468-2bc3-4ea8-84ae-bec91b54dd53-kube-proxy\") pod \"kube-proxy-7lhc6\" (UID: \"6a163468-2bc3-4ea8-84ae-bec91b54dd53\") " pod="kube-system/kube-proxy-7lhc6"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618768    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29aebd79-e3bf-4715-b6f3-a8ea5baea1eb-xtables-lock\") pod \"kindnet-s5z6t\" (UID: \"29aebd79-e3bf-4715-b6f3-a8ea5baea1eb\") " pod="kube-system/kindnet-s5z6t"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618805    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a163468-2bc3-4ea8-84ae-bec91b54dd53-lib-modules\") pod \"kube-proxy-7lhc6\" (UID: \"6a163468-2bc3-4ea8-84ae-bec91b54dd53\") " pod="kube-system/kube-proxy-7lhc6"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618828    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gcdt\" (UniqueName: \"kubernetes.io/projected/6a163468-2bc3-4ea8-84ae-bec91b54dd53-kube-api-access-7gcdt\") pod \"kube-proxy-7lhc6\" (UID: \"6a163468-2bc3-4ea8-84ae-bec91b54dd53\") " pod="kube-system/kube-proxy-7lhc6"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618851    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/29aebd79-e3bf-4715-b6f3-a8ea5baea1eb-cni-cfg\") pod \"kindnet-s5z6t\" (UID: \"29aebd79-e3bf-4715-b6f3-a8ea5baea1eb\") " pod="kube-system/kindnet-s5z6t"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618872    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29aebd79-e3bf-4715-b6f3-a8ea5baea1eb-lib-modules\") pod \"kindnet-s5z6t\" (UID: \"29aebd79-e3bf-4715-b6f3-a8ea5baea1eb\") " pod="kube-system/kindnet-s5z6t"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618901    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5qbh\" (UniqueName: \"kubernetes.io/projected/29aebd79-e3bf-4715-b6f3-a8ea5baea1eb-kube-api-access-v5qbh\") pod \"kindnet-s5z6t\" (UID: \"29aebd79-e3bf-4715-b6f3-a8ea5baea1eb\") " pod="kube-system/kindnet-s5z6t"
	Dec 17 08:32:31 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:31.618925    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a163468-2bc3-4ea8-84ae-bec91b54dd53-xtables-lock\") pod \"kube-proxy-7lhc6\" (UID: \"6a163468-2bc3-4ea8-84ae-bec91b54dd53\") " pod="kube-system/kube-proxy-7lhc6"
	Dec 17 08:32:34 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:34.245834    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7lhc6" podStartSLOduration=3.245809851 podStartE2EDuration="3.245809851s" podCreationTimestamp="2025-12-17 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:32.857125832 +0000 UTC m=+7.145535988" watchObservedRunningTime="2025-12-17 08:32:34.245809851 +0000 UTC m=+8.534219994"
	Dec 17 08:32:39 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:39.516803    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s5z6t" podStartSLOduration=6.037055208 podStartE2EDuration="8.516780443s" podCreationTimestamp="2025-12-17 08:32:31 +0000 UTC" firstStartedPulling="2025-12-17 08:32:31.893920702 +0000 UTC m=+6.182330829" lastFinishedPulling="2025-12-17 08:32:34.373645942 +0000 UTC m=+8.662056064" observedRunningTime="2025-12-17 08:32:34.884761218 +0000 UTC m=+9.173171360" watchObservedRunningTime="2025-12-17 08:32:39.516780443 +0000 UTC m=+13.805190585"
	Dec 17 08:32:45 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:45.303307    1334 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 08:32:45 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:45.415303    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c3d96f21-a7d0-459b-a164-e9cc1e73add9-tmp\") pod \"storage-provisioner\" (UID: \"c3d96f21-a7d0-459b-a164-e9cc1e73add9\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:45 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:45.415382    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b659d652-9af1-45eb-be9e-129cf428ab14-config-volume\") pod \"coredns-66bc5c9577-4n72s\" (UID: \"b659d652-9af1-45eb-be9e-129cf428ab14\") " pod="kube-system/coredns-66bc5c9577-4n72s"
	Dec 17 08:32:45 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:45.415424    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4n2\" (UniqueName: \"kubernetes.io/projected/c3d96f21-a7d0-459b-a164-e9cc1e73add9-kube-api-access-6b4n2\") pod \"storage-provisioner\" (UID: \"c3d96f21-a7d0-459b-a164-e9cc1e73add9\") " pod="kube-system/storage-provisioner"
	Dec 17 08:32:45 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:45.415451    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hchz7\" (UniqueName: \"kubernetes.io/projected/b659d652-9af1-45eb-be9e-129cf428ab14-kube-api-access-hchz7\") pod \"coredns-66bc5c9577-4n72s\" (UID: \"b659d652-9af1-45eb-be9e-129cf428ab14\") " pod="kube-system/coredns-66bc5c9577-4n72s"
	Dec 17 08:32:45 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:45.913434    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4n72s" podStartSLOduration=14.913409432 podStartE2EDuration="14.913409432s" podCreationTimestamp="2025-12-17 08:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:45.899448623 +0000 UTC m=+20.187858765" watchObservedRunningTime="2025-12-17 08:32:45.913409432 +0000 UTC m=+20.201819573"
	Dec 17 08:32:45 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:45.926228    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.926204339 podStartE2EDuration="13.926204339s" podCreationTimestamp="2025-12-17 08:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:32:45.925791996 +0000 UTC m=+20.214202151" watchObservedRunningTime="2025-12-17 08:32:45.926204339 +0000 UTC m=+20.214614481"
	Dec 17 08:32:48 default-k8s-diff-port-225657 kubelet[1334]: I1217 08:32:48.132770    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrbg\" (UniqueName: \"kubernetes.io/projected/0a4eb68c-1efc-41c5-8d95-7bb4c25b10bc-kube-api-access-hcrbg\") pod \"busybox\" (UID: \"0a4eb68c-1efc-41c5-8d95-7bb4c25b10bc\") " pod="default/busybox"
	
	
	==> storage-provisioner [ce049e55589947cee2d7dc838e9a3028a3c8bf83555dfef843f2a017abc9de5c] <==
	I1217 08:32:45.696717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:32:45.706271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:32:45.706325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:32:45.708736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:45.716408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:32:45.716603       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:32:45.716829       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225657_13b36690-79f3-4eb4-9146-c84991cdf3df!
	I1217 08:32:45.716793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0142102-aca8-44fd-b78e-ed774b3ecaf8", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-225657_13b36690-79f3-4eb4-9146-c84991cdf3df became leader
	W1217 08:32:45.719782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:45.723384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:32:45.817436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225657_13b36690-79f3-4eb4-9146-c84991cdf3df!
	W1217 08:32:47.726939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:47.732240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:49.735260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:49.739181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:51.742756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:51.748085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:53.751278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:53.755891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:55.759453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:55.764967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:57.769809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:32:57.776606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
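The storage-provisioner section above shows it acquiring its leader lease through a legacy v1 Endpoints lock, which is why every renewal emits the "v1 Endpoints is deprecated in v1.33+" warning. As a minimal sketch of how to inspect that lock by hand, using the same kubectl context the post-mortem runs below, and assuming the holder record still lives in the control-plane.alpha.kubernetes.io/leader annotation (the annotation key is an assumption, not shown in the log):

	# Hypothetical look at the provisioner's leader-election lock object;
	# the annotation key is an assumption, not taken from this log.
	kubectl --context default-k8s-diff-port-225657 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'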
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-225657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)
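To iterate on a single failure like this one outside the full suite, the subtest can be re-run on its own with the Go test runner. The package path and timeout below are assumptions based on the *_test.go file names this report references; the report itself only records the compiled test binary's output:

	# Hypothetical standalone re-run; the ./test/integration path is an assumption.
	go test ./test/integration/... -run 'TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive' -v -timeout 30m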

x
+
TestStartStop/group/embed-certs/serial/Pause (6.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-581631 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-581631 --alsologtostderr -v=1: exit status 80 (2.339406445s)

-- stdout --
	* Pausing node embed-certs-581631 ... 
	
	

-- /stdout --
** stderr ** 
	I1217 08:33:47.799798  897131 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:47.800111  897131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:47.800126  897131 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:47.800133  897131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:47.800408  897131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:47.800779  897131 out.go:368] Setting JSON to false
	I1217 08:33:47.800809  897131 mustload.go:66] Loading cluster: embed-certs-581631
	I1217 08:33:47.801178  897131 config.go:182] Loaded profile config "embed-certs-581631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:47.801681  897131 cli_runner.go:164] Run: docker container inspect embed-certs-581631 --format={{.State.Status}}
	I1217 08:33:47.825613  897131 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:33:47.825952  897131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:47.892823  897131 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 08:33:47.88267757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:47.893513  897131 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-581631 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 08:33:47.895843  897131 out.go:179] * Pausing node embed-certs-581631 ... 
	I1217 08:33:47.897735  897131 host.go:66] Checking if "embed-certs-581631" exists ...
	I1217 08:33:47.898096  897131 ssh_runner.go:195] Run: systemctl --version
	I1217 08:33:47.898151  897131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-581631
	I1217 08:33:47.918141  897131 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/embed-certs-581631/id_ed25519 Username:docker}
	I1217 08:33:48.013307  897131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:48.027366  897131 pause.go:52] kubelet running: true
	I1217 08:33:48.027426  897131 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:48.224036  897131 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:48.224133  897131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:33:48.305013  897131 cri.go:89] found id: "10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949"
	I1217 08:33:48.305045  897131 cri.go:89] found id: "74336b447d076eb6601b50d8a5bbc837099ef6eee8d219659ec500edaf3ae63a"
	I1217 08:33:48.305052  897131 cri.go:89] found id: "8c9c5cb7d5f07608866c6a6069eb19c6ff6912f829c84b1d6106b6f37984966b"
	I1217 08:33:48.305058  897131 cri.go:89] found id: "9f7ac5f3d2a15d2684611317f67a3c2210f9c68172e2cf5714548c2f9bd54ef3"
	I1217 08:33:48.305062  897131 cri.go:89] found id: "7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0"
	I1217 08:33:48.305083  897131 cri.go:89] found id: "79831ec89cc5a55b9420cabfb5188263a179325c1b809e4f1b11f241ca39131c"
	I1217 08:33:48.305088  897131 cri.go:89] found id: "c329f979a08a46f2a0e41d1fd5c750409c27319d839acebb55967a6c5075748c"
	I1217 08:33:48.305092  897131 cri.go:89] found id: "f79f6823e0e24a3f6a2e174ad04b8359e9c8d6e4bc205fed1c8b015b611cd6d2"
	I1217 08:33:48.305097  897131 cri.go:89] found id: "7d3db7fd1bb8c12d886bfea8c0b0731baaa3804b175ca5e69fc930ef2c9c3881"
	I1217 08:33:48.305106  897131 cri.go:89] found id: "6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187"
	I1217 08:33:48.305115  897131 cri.go:89] found id: "477f92307f3a906d587ee7835d84c2a4746df3a62118fdf285b9c4f1f4af8391"
	I1217 08:33:48.305119  897131 cri.go:89] found id: ""
	I1217 08:33:48.305185  897131 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:33:48.319041  897131 retry.go:31] will retry after 242.972154ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:48Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:48.562595  897131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:48.580119  897131 pause.go:52] kubelet running: false
	I1217 08:33:48.580205  897131 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:48.734249  897131 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:48.734355  897131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:33:48.820490  897131 cri.go:89] found id: "10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949"
	I1217 08:33:48.820517  897131 cri.go:89] found id: "74336b447d076eb6601b50d8a5bbc837099ef6eee8d219659ec500edaf3ae63a"
	I1217 08:33:48.820523  897131 cri.go:89] found id: "8c9c5cb7d5f07608866c6a6069eb19c6ff6912f829c84b1d6106b6f37984966b"
	I1217 08:33:48.820554  897131 cri.go:89] found id: "9f7ac5f3d2a15d2684611317f67a3c2210f9c68172e2cf5714548c2f9bd54ef3"
	I1217 08:33:48.820560  897131 cri.go:89] found id: "7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0"
	I1217 08:33:48.820567  897131 cri.go:89] found id: "79831ec89cc5a55b9420cabfb5188263a179325c1b809e4f1b11f241ca39131c"
	I1217 08:33:48.820571  897131 cri.go:89] found id: "c329f979a08a46f2a0e41d1fd5c750409c27319d839acebb55967a6c5075748c"
	I1217 08:33:48.820576  897131 cri.go:89] found id: "f79f6823e0e24a3f6a2e174ad04b8359e9c8d6e4bc205fed1c8b015b611cd6d2"
	I1217 08:33:48.820581  897131 cri.go:89] found id: "7d3db7fd1bb8c12d886bfea8c0b0731baaa3804b175ca5e69fc930ef2c9c3881"
	I1217 08:33:48.820592  897131 cri.go:89] found id: "6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187"
	I1217 08:33:48.820600  897131 cri.go:89] found id: "477f92307f3a906d587ee7835d84c2a4746df3a62118fdf285b9c4f1f4af8391"
	I1217 08:33:48.820618  897131 cri.go:89] found id: ""
	I1217 08:33:48.820662  897131 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:33:48.834053  897131 retry.go:31] will retry after 247.613881ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:48Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:49.082696  897131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:49.097902  897131 pause.go:52] kubelet running: false
	I1217 08:33:49.097964  897131 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:49.248224  897131 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:49.248307  897131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:33:49.322074  897131 cri.go:89] found id: "10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949"
	I1217 08:33:49.322098  897131 cri.go:89] found id: "74336b447d076eb6601b50d8a5bbc837099ef6eee8d219659ec500edaf3ae63a"
	I1217 08:33:49.322102  897131 cri.go:89] found id: "8c9c5cb7d5f07608866c6a6069eb19c6ff6912f829c84b1d6106b6f37984966b"
	I1217 08:33:49.322106  897131 cri.go:89] found id: "9f7ac5f3d2a15d2684611317f67a3c2210f9c68172e2cf5714548c2f9bd54ef3"
	I1217 08:33:49.322109  897131 cri.go:89] found id: "7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0"
	I1217 08:33:49.322115  897131 cri.go:89] found id: "79831ec89cc5a55b9420cabfb5188263a179325c1b809e4f1b11f241ca39131c"
	I1217 08:33:49.322119  897131 cri.go:89] found id: "c329f979a08a46f2a0e41d1fd5c750409c27319d839acebb55967a6c5075748c"
	I1217 08:33:49.322123  897131 cri.go:89] found id: "f79f6823e0e24a3f6a2e174ad04b8359e9c8d6e4bc205fed1c8b015b611cd6d2"
	I1217 08:33:49.322128  897131 cri.go:89] found id: "7d3db7fd1bb8c12d886bfea8c0b0731baaa3804b175ca5e69fc930ef2c9c3881"
	I1217 08:33:49.322137  897131 cri.go:89] found id: "6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187"
	I1217 08:33:49.322142  897131 cri.go:89] found id: "477f92307f3a906d587ee7835d84c2a4746df3a62118fdf285b9c4f1f4af8391"
	I1217 08:33:49.322146  897131 cri.go:89] found id: ""
	I1217 08:33:49.322193  897131 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:33:49.335660  897131 retry.go:31] will retry after 453.106402ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:49Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:49.789304  897131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:49.803178  897131 pause.go:52] kubelet running: false
	I1217 08:33:49.803241  897131 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:49.957762  897131 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:49.957874  897131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:33:50.035679  897131 cri.go:89] found id: "10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949"
	I1217 08:33:50.035729  897131 cri.go:89] found id: "74336b447d076eb6601b50d8a5bbc837099ef6eee8d219659ec500edaf3ae63a"
	I1217 08:33:50.035736  897131 cri.go:89] found id: "8c9c5cb7d5f07608866c6a6069eb19c6ff6912f829c84b1d6106b6f37984966b"
	I1217 08:33:50.035741  897131 cri.go:89] found id: "9f7ac5f3d2a15d2684611317f67a3c2210f9c68172e2cf5714548c2f9bd54ef3"
	I1217 08:33:50.035745  897131 cri.go:89] found id: "7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0"
	I1217 08:33:50.035750  897131 cri.go:89] found id: "79831ec89cc5a55b9420cabfb5188263a179325c1b809e4f1b11f241ca39131c"
	I1217 08:33:50.035754  897131 cri.go:89] found id: "c329f979a08a46f2a0e41d1fd5c750409c27319d839acebb55967a6c5075748c"
	I1217 08:33:50.035759  897131 cri.go:89] found id: "f79f6823e0e24a3f6a2e174ad04b8359e9c8d6e4bc205fed1c8b015b611cd6d2"
	I1217 08:33:50.035763  897131 cri.go:89] found id: "7d3db7fd1bb8c12d886bfea8c0b0731baaa3804b175ca5e69fc930ef2c9c3881"
	I1217 08:33:50.035773  897131 cri.go:89] found id: "6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187"
	I1217 08:33:50.035777  897131 cri.go:89] found id: "477f92307f3a906d587ee7835d84c2a4746df3a62118fdf285b9c4f1f4af8391"
	I1217 08:33:50.035782  897131 cri.go:89] found id: ""
	I1217 08:33:50.035833  897131 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:33:50.052030  897131 out.go:203] 
	W1217 08:33:50.053742  897131 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 08:33:50.053763  897131 out.go:285] * 
	* 
	W1217 08:33:50.059275  897131 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 08:33:50.061181  897131 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-581631 --alsologtostderr -v=1 failed: exit status 80
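The pause flow above first disables the kubelet, then lists kube-system/kubernetes-dashboard/istio-operator containers through crictl, and finally keeps retrying sudo runc list -f json, which fails every time with open /run/runc: no such file or directory until minikube gives up with GUEST_PAUSE. A minimal way to poke at the same state directories by hand, assuming the profile is still up and that CRI-O keeps its runtime state under /run/crio (only /run/runc actually appears in the log; the second path is an assumption):

	# Hypothetical manual check on the node; /run/crio is an assumed location.
	out/minikube-linux-amd64 -p embed-certs-581631 ssh -- sudo ls /run/runc
	out/minikube-linux-amd64 -p embed-certs-581631 ssh -- sudo ls /run/crio
	out/minikube-linux-amd64 -p embed-certs-581631 ssh -- sudo runc list -f json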
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-581631
helpers_test.go:244: (dbg) docker inspect embed-certs-581631:

-- stdout --
	[
	    {
	        "Id": "ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3",
	        "Created": "2025-12-17T08:31:39.009229822Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 886545,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:32:51.456867196Z",
	            "FinishedAt": "2025-12-17T08:32:50.58064402Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/hosts",
	        "LogPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3-json.log",
	        "Name": "/embed-certs-581631",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-581631:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-581631",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3",
	                "LowerDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-581631",
	                "Source": "/var/lib/docker/volumes/embed-certs-581631/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-581631",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-581631",
	                "name.minikube.sigs.k8s.io": "embed-certs-581631",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3ec15b47bb4633a2d5a3df2b7cc2f62f40d86614a7fce9f92b4d079b7a1d742b",
	            "SandboxKey": "/var/run/docker/netns/3ec15b47bb46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-581631": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1180462b720da0ae1fa73d0b014c57b2b6955441a1e7b7b4a2e5db28ef5abec",
	                    "EndpointID": "03e8514c473f7c25c4f20cd997a9f73f6e542a82f819df14d3939b1919c11702",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "da:a6:f0:52:b2:05",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-581631",
	                        "ce9b768a5250"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
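The NetworkSettings.Ports map in the inspect output above is where the dynamically assigned host ports live, including the 127.0.0.1:33520 SSH endpoint the pause command's ssh client connected to. The same Go template the log shows minikube evaluating can be run by hand; only the shell quoting differs here:

	# Print the host port mapped to the node's 22/tcp, as minikube resolves it.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-581631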
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631: exit status 2 (363.569014ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
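Exit status 2 from the status command just means some component is not in the expected state even though the host itself reports Running. Querying a few fields at once makes the split visible; the Kubelet and APIServer field names below are assumptions about the status template, since only {{.Host}} is exercised in this particular check:

	# Hypothetical combined status query; Kubelet/APIServer field names are assumed.
	out/minikube-linux-amd64 status -p embed-certs-581631 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'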
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-581631 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-581631 logs -n 25: (1.248573822s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                                                                                               │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p no-preload-936988 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                   │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                               │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
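The Audit table above is rendered by minikube logs from the profile's persisted command history. Assuming the default layout under the MINIKUBE_HOME used by this run, the raw entries can also be read straight from disk; the audit.json path is an assumption and is not printed anywhere in this log:

	# Hypothetical direct read of the audit history backing the table above.
	cat /home/jenkins/minikube-integration/22182-552461/.minikube/logs/audit.json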
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:16
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
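Read against that format string, the first entry below decodes as severity I (info), date 1217 (December 17), time 08:33:16.728946, thread id 893657, and source location out.go:360.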
	I1217 08:33:16.728946  893657 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:16.729265  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729278  893657 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:16.729285  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729634  893657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:16.730240  893657 out.go:368] Setting JSON to false
	I1217 08:33:16.732006  893657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8142,"bootTime":1765952255,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:16.732103  893657 start.go:143] virtualization: kvm guest
	I1217 08:33:16.736563  893657 out.go:179] * [default-k8s-diff-port-225657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:16.738941  893657 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:16.738995  893657 notify.go:221] Checking for updates...
	I1217 08:33:16.742759  893657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:16.746850  893657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:16.748597  893657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:16.750659  893657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:16.753168  893657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:16.756488  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:16.757459  893657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:16.792888  893657 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:16.793019  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.867744  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.854455776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.867913  893657 docker.go:319] overlay module found
	I1217 08:33:16.871013  893657 out.go:179] * Using the docker driver based on existing profile
	I1217 08:33:16.873307  893657 start.go:309] selected driver: docker
	I1217 08:33:16.873331  893657 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.873487  893657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:16.874376  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.951072  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.935361077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.951510  893657 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:16.951573  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:16.951645  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:16.951709  893657 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.956015  893657 out.go:179] * Starting "default-k8s-diff-port-225657" primary control-plane node in "default-k8s-diff-port-225657" cluster
	I1217 08:33:16.957479  893657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:16.959060  893657 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:16.960240  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:16.960283  893657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:16.960307  893657 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:16.960329  893657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:16.960440  893657 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:16.960458  893657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:33:16.960662  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:16.986877  893657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:16.986906  893657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:16.986928  893657 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:16.986979  893657 start.go:360] acquireMachinesLock for default-k8s-diff-port-225657: {Name:mkf524609fef75b896bc809c6c5673b68f778ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:16.987060  893657 start.go:364] duration metric: took 53.96µs to acquireMachinesLock for "default-k8s-diff-port-225657"
	I1217 08:33:16.987092  893657 start.go:96] Skipping create...Using existing machine configuration
	I1217 08:33:16.987100  893657 fix.go:54] fixHost starting: 
	I1217 08:33:16.987446  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.012833  893657 fix.go:112] recreateIfNeeded on default-k8s-diff-port-225657: state=Stopped err=<nil>
	W1217 08:33:17.012874  893657 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 08:33:13.213803  890801 addons.go:530] duration metric: took 2.18918486s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:13.705622  890801 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 08:33:13.710080  890801 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 08:33:13.711221  890801 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:33:13.711249  890801 api_server.go:131] duration metric: took 506.788041ms to wait for apiserver health ...
	I1217 08:33:13.711258  890801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:13.715494  890801 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:13.715559  890801 system_pods.go:61] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.715571  890801 system_pods.go:61] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.715580  890801 system_pods.go:61] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.715587  890801 system_pods.go:61] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.715598  890801 system_pods.go:61] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.715604  890801 system_pods.go:61] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.715610  890801 system_pods.go:61] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.715617  890801 system_pods.go:61] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.715626  890801 system_pods.go:74] duration metric: took 4.361363ms to wait for pod list to return data ...
	I1217 08:33:13.715639  890801 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:13.718438  890801 default_sa.go:45] found service account: "default"
	I1217 08:33:13.718465  890801 default_sa.go:55] duration metric: took 2.817296ms for default service account to be created ...
	I1217 08:33:13.718477  890801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:13.722138  890801 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:13.722180  890801 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.722194  890801 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.722204  890801 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.722214  890801 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.722223  890801 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.722234  890801 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.722243  890801 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.722259  890801 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.722272  890801 system_pods.go:126] duration metric: took 3.785279ms to wait for k8s-apps to be running ...
	I1217 08:33:13.722289  890801 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:13.722352  890801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:13.737774  890801 system_svc.go:56] duration metric: took 15.474847ms WaitForService to wait for kubelet
	I1217 08:33:13.737805  890801 kubeadm.go:587] duration metric: took 2.713427844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:13.737833  890801 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:13.772714  890801 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:13.772756  890801 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:13.772775  890801 node_conditions.go:105] duration metric: took 34.937186ms to run NodePressure ...
	I1217 08:33:13.772792  890801 start.go:242] waiting for startup goroutines ...
	I1217 08:33:13.772803  890801 start.go:247] waiting for cluster config update ...
	I1217 08:33:13.772825  890801 start.go:256] writing updated cluster config ...
	I1217 08:33:13.773173  890801 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:13.777812  890801 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:13.783637  890801 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:15.868337  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:15.181390  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.182344  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:19.681167  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.003119  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:19.003173  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:21.003325  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:17.017254  893657 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-225657" ...
	I1217 08:33:17.017346  893657 cli_runner.go:164] Run: docker start default-k8s-diff-port-225657
	I1217 08:33:17.373663  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.400760  893657 kic.go:432] container "default-k8s-diff-port-225657" state is running.
	I1217 08:33:17.401442  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:17.429446  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:17.429718  893657 machine.go:94] provisionDockerMachine start ...
	I1217 08:33:17.429809  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:17.458096  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:17.458238  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:17.458254  893657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:33:17.459170  893657 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48512->127.0.0.1:33530: read: connection reset by peer
	I1217 08:33:20.612283  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.612308  893657 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225657"
	I1217 08:33:20.612373  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.636332  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.636502  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.636519  893657 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225657 && echo "default-k8s-diff-port-225657" | sudo tee /etc/hostname
	I1217 08:33:20.804510  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.804742  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.834923  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.835091  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.835140  893657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225657/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:33:20.984217  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:33:20.984254  893657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:33:20.984307  893657 ubuntu.go:190] setting up certificates
	I1217 08:33:20.984330  893657 provision.go:84] configureAuth start
	I1217 08:33:20.984434  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:21.010705  893657 provision.go:143] copyHostCerts
	I1217 08:33:21.010798  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:33:21.010816  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:33:21.010896  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:33:21.011010  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:33:21.011024  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:33:21.011068  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:33:21.011154  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:33:21.011165  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:33:21.011204  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:33:21.011353  893657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225657 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-225657 localhost minikube]
	I1217 08:33:21.094979  893657 provision.go:177] copyRemoteCerts
	I1217 08:33:21.095063  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:33:21.095123  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.119755  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:21.226499  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:33:21.252430  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 08:33:21.276413  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:33:21.304875  893657 provision.go:87] duration metric: took 320.523082ms to configureAuth
	I1217 08:33:21.304910  893657 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:33:21.305140  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:21.305286  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.329333  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:21.329469  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:21.329488  893657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1217 08:33:18.289602  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:20.292744  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:22.296974  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:21.764845  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:24.179988  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	I1217 08:33:22.731689  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:33:22.731722  893657 machine.go:97] duration metric: took 5.301986136s to provisionDockerMachine
	I1217 08:33:22.731749  893657 start.go:293] postStartSetup for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:33:22.731769  893657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:33:22.731852  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:33:22.731920  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.761364  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:22.876306  893657 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:33:22.881359  893657 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:33:22.881395  893657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:33:22.881410  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:33:22.881482  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:33:22.881678  893657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:33:22.881825  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:33:22.894563  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:22.920348  893657 start.go:296] duration metric: took 188.5726ms for postStartSetup
	I1217 08:33:22.920449  893657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:33:22.920492  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.945406  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.048667  893657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:33:23.054963  893657 fix.go:56] duration metric: took 6.067856877s for fixHost
	I1217 08:33:23.054990  893657 start.go:83] releasing machines lock for "default-k8s-diff-port-225657", held for 6.067916149s
	I1217 08:33:23.055062  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:23.078512  893657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:33:23.078652  893657 ssh_runner.go:195] Run: cat /version.json
	I1217 08:33:23.078657  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.078715  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.105947  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.108771  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.290972  893657 ssh_runner.go:195] Run: systemctl --version
	I1217 08:33:23.299819  893657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:33:23.349000  893657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:33:23.357029  893657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:33:23.357106  893657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:33:23.369670  893657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:33:23.369700  893657 start.go:496] detecting cgroup driver to use...
	I1217 08:33:23.369789  893657 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:33:23.369842  893657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:33:23.391525  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:33:23.409286  893657 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:33:23.409355  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:33:23.431984  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:33:23.448992  893657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:33:23.545374  893657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:33:23.651657  893657 docker.go:234] disabling docker service ...
	I1217 08:33:23.651738  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:33:23.671894  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:33:23.692032  893657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:33:23.817651  893657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:33:23.939609  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:33:23.958144  893657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:33:23.979250  893657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:33:23.979317  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:23.992227  893657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:33:23.992295  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.006950  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.020376  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.035025  893657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:33:24.046957  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.061093  893657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.074985  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.089611  893657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:33:24.101042  893657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:33:24.111709  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:24.230001  893657 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 08:33:24.884276  893657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:33:24.884364  893657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:33:24.889824  893657 start.go:564] Will wait 60s for crictl version
	I1217 08:33:24.889930  893657 ssh_runner.go:195] Run: which crictl
	I1217 08:33:24.895473  893657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:33:24.926169  893657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:33:24.926256  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.960427  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.997284  893657 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 08:33:24.999194  893657 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:25.022353  893657 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:33:25.027067  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.040819  893657 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:33:25.040970  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:25.041036  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.078474  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.078507  893657 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:33:25.078631  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.106774  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.106807  893657 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:33:25.106818  893657 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1217 08:33:25.106948  893657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:33:25.107036  893657 ssh_runner.go:195] Run: crio config
	I1217 08:33:25.157252  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:25.157281  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:25.157301  893657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:33:25.157340  893657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225657 NodeName:default-k8s-diff-port-225657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:33:25.157504  893657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:33:25.157619  893657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:33:25.166826  893657 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:33:25.166896  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:33:25.175526  893657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1217 08:33:25.190511  893657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:33:25.205768  893657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 08:33:25.223688  893657 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:33:25.229125  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.242599  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:25.333339  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:25.360367  893657 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657 for IP: 192.168.103.2
	I1217 08:33:25.360421  893657 certs.go:195] generating shared ca certs ...
	I1217 08:33:25.360443  893657 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:25.360645  893657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:33:25.360690  893657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:33:25.360701  893657 certs.go:257] generating profile certs ...
	I1217 08:33:25.360801  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key
	I1217 08:33:25.360866  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92
	I1217 08:33:25.360902  893657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key
	I1217 08:33:25.361012  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:33:25.361046  893657 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:33:25.361053  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:33:25.361077  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:33:25.361100  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:33:25.361123  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:33:25.361168  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:25.361783  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:33:25.382178  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:33:25.405095  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:33:25.426692  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:33:25.452196  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 08:33:25.472263  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:33:25.492102  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:33:25.512166  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:33:25.530987  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:33:25.550506  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:33:25.571554  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:33:25.591167  893657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:33:25.604816  893657 ssh_runner.go:195] Run: openssl version
	I1217 08:33:25.611390  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.620038  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:33:25.628157  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632565  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632630  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.668190  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:33:25.677861  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.686457  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:33:25.694766  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.698960  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.699026  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.735265  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:33:25.743914  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.752739  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:33:25.762448  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766776  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766841  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.804716  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:33:25.813678  893657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:33:25.818021  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:33:25.853937  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:33:25.905092  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:33:25.949996  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:33:25.998953  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:33:26.055041  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 08:33:26.093895  893657 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:26.093984  893657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:33:26.094037  893657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:33:26.131324  893657 cri.go:89] found id: "29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9"
	I1217 08:33:26.131350  893657 cri.go:89] found id: "f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52"
	I1217 08:33:26.131356  893657 cri.go:89] found id: "e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610"
	I1217 08:33:26.131361  893657 cri.go:89] found id: "75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d"
	I1217 08:33:26.131366  893657 cri.go:89] found id: ""
	I1217 08:33:26.131415  893657 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:33:26.144718  893657 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:26Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:26.144807  893657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:33:26.153957  893657 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:33:26.153979  893657 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:33:26.154032  893657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:33:26.162673  893657 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:33:26.164033  893657 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-225657" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.165037  893657 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-225657" cluster setting kubeconfig missing "default-k8s-diff-port-225657" context setting]
	I1217 08:33:26.166469  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.168992  893657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:33:26.178665  893657 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 08:33:26.178709  893657 kubeadm.go:602] duration metric: took 24.72291ms to restartPrimaryControlPlane
	I1217 08:33:26.178722  893657 kubeadm.go:403] duration metric: took 84.838549ms to StartCluster
	I1217 08:33:26.178743  893657 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.178810  893657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.181267  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.181609  893657 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:26.181743  893657 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:33:26.181863  893657 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181869  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:26.181897  893657 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.181907  893657 addons.go:248] addon storage-provisioner should already be in state true
	I1217 08:33:26.181905  893657 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181922  893657 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181933  893657 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-225657"
	I1217 08:33:26.181936  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	W1217 08:33:26.181943  893657 addons.go:248] addon dashboard should already be in state true
	I1217 08:33:26.181946  893657 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225657"
	I1217 08:33:26.181976  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.182259  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182470  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182505  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.184915  893657 out.go:179] * Verifying Kubernetes components...
	I1217 08:33:26.186210  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:26.212304  893657 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 08:33:26.214226  893657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:33:26.214980  893657 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 08:33:23.502639  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:26.006843  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:26.216388  893657 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.216412  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:33:26.216477  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.217466  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 08:33:26.217490  893657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 08:33:26.217560  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.228115  893657 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.228150  893657 addons.go:248] addon default-storageclass should already be in state true
	I1217 08:33:26.228184  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.228704  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.261124  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.263048  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.276039  893657 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.276071  893657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:33:26.276135  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.304101  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.360397  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:26.376999  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 08:33:26.377127  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 08:33:26.380755  893657 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:26.392863  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 08:33:26.392899  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 08:33:26.392976  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.413220  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.414384  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 08:33:26.414420  893657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 08:33:26.434913  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 08:33:26.434938  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 08:33:26.451283  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 08:33:26.451316  893657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 08:33:26.476854  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 08:33:26.476882  893657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 08:33:26.492758  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 08:33:26.492796  893657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 08:33:26.508872  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 08:33:26.508899  893657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 08:33:26.524202  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 08:33:26.524232  893657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 08:33:26.539724  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 08:33:24.789524  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:27.291456  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:27.959793  893657 node_ready.go:49] node "default-k8s-diff-port-225657" is "Ready"
	I1217 08:33:27.959838  893657 node_ready.go:38] duration metric: took 1.579048972s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:27.959857  893657 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:33:27.959926  893657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:33:28.524393  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.131379535s)
	I1217 08:33:28.524466  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111215264s)
	I1217 08:33:28.524703  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.98493694s)
	I1217 08:33:28.524763  893657 api_server.go:72] duration metric: took 2.343114327s to wait for apiserver process to appear ...
	I1217 08:33:28.524791  893657 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:33:28.524815  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:28.526653  893657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-225657 addons enable metrics-server
	
	I1217 08:33:28.530002  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:28.530034  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:28.535131  893657 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1217 08:33:26.679455  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:29.179302  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:28.012078  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:30.501159  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:28.536292  893657 addons.go:530] duration metric: took 2.354557541s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:29.025630  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.030789  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:29.030828  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:29.525077  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.529889  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1217 08:33:29.530993  893657 api_server.go:141] control plane version: v1.34.3
	I1217 08:33:29.531018  893657 api_server.go:131] duration metric: took 1.006217623s to wait for apiserver health ...
	I1217 08:33:29.531030  893657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:29.537008  893657 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:29.537148  893657 system_pods.go:61] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.537251  893657 system_pods.go:61] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.537275  893657 system_pods.go:61] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.537287  893657 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.537302  893657 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.537311  893657 system_pods.go:61] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.537373  893657 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.537391  893657 system_pods.go:61] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.537403  893657 system_pods.go:74] duration metric: took 6.36482ms to wait for pod list to return data ...
	I1217 08:33:29.537418  893657 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:29.540237  893657 default_sa.go:45] found service account: "default"
	I1217 08:33:29.540261  893657 default_sa.go:55] duration metric: took 2.835186ms for default service account to be created ...
	I1217 08:33:29.540272  893657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:29.547420  893657 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:29.547465  893657 system_pods.go:89] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.547486  893657 system_pods.go:89] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.547494  893657 system_pods.go:89] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.547502  893657 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.547511  893657 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.547519  893657 system_pods.go:89] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.547526  893657 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.547545  893657 system_pods.go:89] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.547556  893657 system_pods.go:126] duration metric: took 7.275351ms to wait for k8s-apps to be running ...
	I1217 08:33:29.547565  893657 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:29.547621  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:29.570551  893657 system_svc.go:56] duration metric: took 22.962055ms WaitForService to wait for kubelet
	I1217 08:33:29.570588  893657 kubeadm.go:587] duration metric: took 3.388942328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:29.570612  893657 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:29.573955  893657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:29.573987  893657 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:29.574004  893657 node_conditions.go:105] duration metric: took 3.385946ms to run NodePressure ...
	I1217 08:33:29.574016  893657 start.go:242] waiting for startup goroutines ...
	I1217 08:33:29.574023  893657 start.go:247] waiting for cluster config update ...
	I1217 08:33:29.574033  893657 start.go:256] writing updated cluster config ...
	I1217 08:33:29.574301  893657 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:29.579418  893657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:29.583233  893657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:31.590019  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:29.790012  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:32.289282  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:31.679660  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:34.180323  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:33.000662  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:34.501698  886345 pod_ready.go:94] pod "coredns-66bc5c9577-p7sqj" is "Ready"
	I1217 08:33:34.501740  886345 pod_ready.go:86] duration metric: took 31.006567227s for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.504499  886345 pod_ready.go:83] waiting for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.509821  886345 pod_ready.go:94] pod "etcd-embed-certs-581631" is "Ready"
	I1217 08:33:34.509852  886345 pod_ready.go:86] duration metric: took 5.326473ms for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.512747  886345 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.518177  886345 pod_ready.go:94] pod "kube-apiserver-embed-certs-581631" is "Ready"
	I1217 08:33:34.518209  886345 pod_ready.go:86] duration metric: took 5.434504ms for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.520782  886345 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.699712  886345 pod_ready.go:94] pod "kube-controller-manager-embed-certs-581631" is "Ready"
	I1217 08:33:34.699750  886345 pod_ready.go:86] duration metric: took 178.942994ms for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.899576  886345 pod_ready.go:83] waiting for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.299641  886345 pod_ready.go:94] pod "kube-proxy-7z26t" is "Ready"
	I1217 08:33:35.299677  886345 pod_ready.go:86] duration metric: took 400.071136ms for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.499469  886345 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.898985  886345 pod_ready.go:94] pod "kube-scheduler-embed-certs-581631" is "Ready"
	I1217 08:33:35.899016  886345 pod_ready.go:86] duration metric: took 399.518108ms for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.899032  886345 pod_ready.go:40] duration metric: took 32.408536567s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:35.962165  886345 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:33:35.967810  886345 out.go:179] * Done! kubectl is now configured to use "embed-certs-581631" cluster and "default" namespace by default
	I1217 08:33:35.180035  885608 pod_ready.go:94] pod "coredns-5dd5756b68-mr99d" is "Ready"
	I1217 08:33:35.180070  885608 pod_ready.go:86] duration metric: took 33.507046133s for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.183848  885608 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.189882  885608 pod_ready.go:94] pod "etcd-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.189917  885608 pod_ready.go:86] duration metric: took 6.040788ms for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.193611  885608 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.199327  885608 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.199356  885608 pod_ready.go:86] duration metric: took 5.717005ms for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.202742  885608 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.377269  885608 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.377299  885608 pod_ready.go:86] duration metric: took 174.528391ms for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.578921  885608 pod_ready.go:83] waiting for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.977275  885608 pod_ready.go:94] pod "kube-proxy-cwfwr" is "Ready"
	I1217 08:33:35.977308  885608 pod_ready.go:86] duration metric: took 398.362323ms for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.179026  885608 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580866  885608 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-640910" is "Ready"
	I1217 08:33:36.580905  885608 pod_ready.go:86] duration metric: took 401.837858ms for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580922  885608 pod_ready.go:40] duration metric: took 34.912908892s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:36.657518  885608 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 08:33:36.659911  885608 out.go:203] 
	W1217 08:33:36.661799  885608 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 08:33:36.663761  885608 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 08:33:36.666738  885608 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-640910" cluster and "default" namespace by default
	W1217 08:33:34.089133  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:36.092451  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:34.289870  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:36.290714  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:38.589783  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:41.088727  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:38.290930  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:40.789798  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:43.089067  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:45.089693  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:43.290645  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:45.789580  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:46.288655  890801 pod_ready.go:94] pod "coredns-7d764666f9-ssxts" is "Ready"
	I1217 08:33:46.288692  890801 pod_ready.go:86] duration metric: took 32.505014626s for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.291480  890801 pod_ready.go:83] waiting for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.297312  890801 pod_ready.go:94] pod "etcd-no-preload-936988" is "Ready"
	I1217 08:33:46.297340  890801 pod_ready.go:86] duration metric: took 5.835833ms for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.392910  890801 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.397502  890801 pod_ready.go:94] pod "kube-apiserver-no-preload-936988" is "Ready"
	I1217 08:33:46.397547  890801 pod_ready.go:86] duration metric: took 4.609982ms for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.399936  890801 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.487409  890801 pod_ready.go:94] pod "kube-controller-manager-no-preload-936988" is "Ready"
	I1217 08:33:46.487441  890801 pod_ready.go:86] duration metric: took 87.480941ms for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.687921  890801 pod_ready.go:83] waiting for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.087638  890801 pod_ready.go:94] pod "kube-proxy-rrz8t" is "Ready"
	I1217 08:33:47.087672  890801 pod_ready.go:86] duration metric: took 399.721259ms for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.287284  890801 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687063  890801 pod_ready.go:94] pod "kube-scheduler-no-preload-936988" is "Ready"
	I1217 08:33:47.687100  890801 pod_ready.go:86] duration metric: took 399.78978ms for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687115  890801 pod_ready.go:40] duration metric: took 33.909261319s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:47.739016  890801 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:33:47.741018  890801 out.go:179] * Done! kubectl is now configured to use "no-preload-936988" cluster and "default" namespace by default
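	
	Editor's sketch (not part of the captured log): the long runs of "pod_ready.go:104] pod ... is not \"Ready\"" warnings above come from a simple poll-until-Ready loop. The Go sketch below is a minimal, hypothetical illustration of that pattern, not minikube's actual pod_ready.go; the kubeconfig path, function names, and the roughly 2-second interval are assumptions for illustration only.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	// waitForPodReady polls the pod until it is Ready or the timeout expires,
	// mirroring the repeated "is not Ready" checks seen in the log above.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %q in %q not Ready after %v (last error: %v)", name, ns, timeout, err)
			}
			time.Sleep(2 * time.Second) // checks in the log are a couple of seconds apart
		}
	}
	
	func main() {
		// Hypothetical kubeconfig path; the report's actual path differs.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Wait up to 4 minutes, mirroring the "extra waiting up to 4m0s" step above.
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-66bc5c9577-4n72s", 4*time.Minute); err != nil {
			panic(err)
		}
	}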
	
	
	==> CRI-O <==
	Dec 17 08:33:20 embed-certs-581631 crio[567]: time="2025-12-17T08:33:20.095718373Z" level=info msg="Started container" PID=1744 containerID=77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper id=f5417407-8ba2-456d-ac7e-3aa5a2de567e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e82ab4c77900e6f286c12781fb0f947a458a8a1fe71bcb0c976013c9ecc253b7
	Dec 17 08:33:20 embed-certs-581631 crio[567]: time="2025-12-17T08:33:20.254926424Z" level=info msg="Removing container: 4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c" id=23f3393f-fa4c-4670-8acd-5fd86be17dc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:20 embed-certs-581631 crio[567]: time="2025-12-17T08:33:20.268035663Z" level=info msg="Removed container 4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=23f3393f-fa4c-4670-8acd-5fd86be17dc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.296363477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8f3b3f0e-1104-46d7-a82b-141a0ba25785 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.297337996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ebf7380-defb-49a5-8fb5-8625048c0c27 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.29847711Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1eda59a7-1e71-49ce-ad9c-f18146dba4fe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.29867976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.30333192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.303526806Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f63ab81c7fde832d05a6398f4c71dbff192ebae776d83d0f17808565dfd0e25d/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.303584818Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f63ab81c7fde832d05a6398f4c71dbff192ebae776d83d0f17808565dfd0e25d/merged/etc/group: no such file or directory"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.303963504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.331715854Z" level=info msg="Created container 10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949: kube-system/storage-provisioner/storage-provisioner" id=1eda59a7-1e71-49ce-ad9c-f18146dba4fe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.332435653Z" level=info msg="Starting container: 10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949" id=130666be-c045-48d5-b34a-45a60d7362f7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.334826433Z" level=info msg="Started container" PID=1758 containerID=10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949 description=kube-system/storage-provisioner/storage-provisioner id=130666be-c045-48d5-b34a-45a60d7362f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c1f33f58ce9c201637c7587ab57c9845ea471636ab5b4de3b09be12f994c2ba5
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.14373502Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=81d1ff9a-6991-4259-87db-4e3d3b07c86d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.14470997Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=853d4788-7a05-4863-8912-da27897825c2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.145804644Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=22488fd4-c8aa-4af2-b920-8eca4dba1dbc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.145944146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.151156222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.151663803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.186001726Z" level=info msg="Created container 6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=22488fd4-c8aa-4af2-b920-8eca4dba1dbc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.186721822Z" level=info msg="Starting container: 6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187" id=67111937-74d5-46ae-9aed-dd8439a7829d name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.188616191Z" level=info msg="Started container" PID=1794 containerID=6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper id=67111937-74d5-46ae-9aed-dd8439a7829d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e82ab4c77900e6f286c12781fb0f947a458a8a1fe71bcb0c976013c9ecc253b7
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.327631139Z" level=info msg="Removing container: 77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26" id=23823168-5dae-481d-b61b-a1312b1f99e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.339210285Z" level=info msg="Removed container 77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=23823168-5dae-481d-b61b-a1312b1f99e1 name=/runtime.v1.RuntimeService/RemoveContainer
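	
	Editor's sketch (not part of the captured log): the cri.go lines earlier in this output ("listing CRI containers in root" followed by "found id: ...") boil down to running crictl with the exact flags shown in the log and splitting its quiet output into container IDs. The Go sketch below is a hypothetical stand-alone version of that step; it assumes crictl and sudo are available on the host, and the helper name is illustrative.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// listKubeSystemContainers runs the same command that appears in the log:
	//   sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	// and returns one container ID per non-empty output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %v: %s", err, out)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}
	
	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}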
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6a158a46500d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   e82ab4c77900e       dashboard-metrics-scraper-6ffb444bf9-g6mkz   kubernetes-dashboard
	10bee23037121       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   c1f33f58ce9c2       storage-provisioner                          kube-system
	477f92307f3a9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   649896cf3187f       kubernetes-dashboard-855c9754f9-xhcfw        kubernetes-dashboard
	74336b447d076       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   ccd13e5699138       coredns-66bc5c9577-p7sqj                     kube-system
	2251d699b8cf5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   abfce39a06c67       busybox                                      default
	8c9c5cb7d5f07       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           48 seconds ago      Running             kube-proxy                  0                   113e530d3a016       kube-proxy-7z26t                             kube-system
	9f7ac5f3d2a15       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           48 seconds ago      Running             kindnet-cni                 0                   63bfca7af1a55       kindnet-wv7n7                                kube-system
	7ad4a12142571       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   c1f33f58ce9c2       storage-provisioner                          kube-system
	79831ec89cc5a       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           52 seconds ago      Running             kube-apiserver              0                   de73cd7367812       kube-apiserver-embed-certs-581631            kube-system
	c329f979a08a4       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           52 seconds ago      Running             kube-controller-manager     0                   e3708cff3899b       kube-controller-manager-embed-certs-581631   kube-system
	f79f6823e0e24       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           52 seconds ago      Running             kube-scheduler              0                   8a55dba3db917       kube-scheduler-embed-certs-581631            kube-system
	7d3db7fd1bb8c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   e9ef0265e1866       etcd-embed-certs-581631                      kube-system
	
	
	==> coredns [74336b447d076eb6601b50d8a5bbc837099ef6eee8d219659ec500edaf3ae63a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58146 - 23582 "HINFO IN 5569509536565776757.7420217286395446187. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033614361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-581631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-581631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=embed-certs-581631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:31:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-581631
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:33:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-581631
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                54d7a8c2-691a-45c0-b4a2-f9840ad8416b
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-p7sqj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-581631                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-wv7n7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-581631             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-581631    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-7z26t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-581631             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g6mkz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xhcfw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node embed-certs-581631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node embed-certs-581631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node embed-certs-581631 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node embed-certs-581631 event: Registered Node embed-certs-581631 in Controller
	  Normal  NodeReady                91s                kubelet          Node embed-certs-581631 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node embed-certs-581631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node embed-certs-581631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node embed-certs-581631 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node embed-certs-581631 event: Registered Node embed-certs-581631 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [7d3db7fd1bb8c12d886bfea8c0b0731baaa3804b175ca5e69fc930ef2c9c3881] <==
	{"level":"warn","ts":"2025-12-17T08:33:01.110451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.120267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.136142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.142447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.151844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.159446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.167362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.176743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.185066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.194254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.203030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.213077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.222186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.231223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.240183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.248675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.255744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.263658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.276667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.286201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.294624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.319316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.327128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.335429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.408932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49970","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:33:51 up  2:16,  0 user,  load average: 5.46, 4.26, 2.92
	Linux embed-certs-581631 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f7ac5f3d2a15d2684611317f67a3c2210f9c68172e2cf5714548c2f9bd54ef3] <==
	I1217 08:33:02.760793       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:02.761169       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 08:33:02.761578       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:02.761663       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:02.761689       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:03.060672       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:03.158057       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:03.158203       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:03.158602       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:03.458692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:03.458741       1 metrics.go:72] Registering metrics
	I1217 08:33:03.458876       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:13.060653       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:13.060833       1 main.go:301] handling current node
	I1217 08:33:23.061671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:23.061715       1 main.go:301] handling current node
	I1217 08:33:33.060856       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:33.060893       1 main.go:301] handling current node
	I1217 08:33:43.060598       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:43.060637       1 main.go:301] handling current node
	
	
	==> kube-apiserver [79831ec89cc5a55b9420cabfb5188263a179325c1b809e4f1b11f241ca39131c] <==
	I1217 08:33:01.914752       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:33:01.914957       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:33:01.915225       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 08:33:01.915384       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 08:33:01.915434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:01.920956       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 08:33:01.928237       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:33:01.928277       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 08:33:01.928242       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 08:33:01.928571       1 aggregator.go:171] initial CRD sync complete...
	I1217 08:33:01.928586       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 08:33:01.928593       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 08:33:01.928600       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:33:01.961947       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:33:02.171244       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:33:02.211394       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:33:02.260942       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:33:02.286907       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:02.297812       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:02.352332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.142.81"}
	I1217 08:33:02.364513       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.85.160"}
	I1217 08:33:02.822268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:33:05.534102       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:33:05.787911       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:33:05.837077       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c329f979a08a46f2a0e41d1fd5c750409c27319d839acebb55967a6c5075748c] <==
	I1217 08:33:05.241634       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 08:33:05.241647       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 08:33:05.241655       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 08:33:05.244740       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:33:05.247946       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 08:33:05.251279       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 08:33:05.280607       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:33:05.280655       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 08:33:05.280703       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 08:33:05.280707       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 08:33:05.280733       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 08:33:05.280768       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:33:05.280779       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 08:33:05.280781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 08:33:05.281014       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 08:33:05.287231       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:33:05.287235       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:33:05.288337       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 08:33:05.288406       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:33:05.292604       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 08:33:05.294857       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 08:33:05.297109       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 08:33:05.301698       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:33:05.303983       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:33:05.304188       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8c9c5cb7d5f07608866c6a6069eb19c6ff6912f829c84b1d6106b6f37984966b] <==
	I1217 08:33:02.542569       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:33:02.619967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:33:02.720570       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:33:02.720615       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:33:02.720712       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:33:02.747274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:02.747360       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:33:02.753469       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:33:02.754035       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:33:02.754070       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:02.755745       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:33:02.755822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:33:02.755856       1 config.go:200] "Starting service config controller"
	I1217 08:33:02.755862       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:33:02.755878       1 config.go:309] "Starting node config controller"
	I1217 08:33:02.755884       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:33:02.755999       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:33:02.756013       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:33:02.856296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:33:02.856313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:33:02.856351       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:33:02.856483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f79f6823e0e24a3f6a2e174ad04b8359e9c8d6e4bc205fed1c8b015b611cd6d2] <==
	I1217 08:32:59.388358       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:33:01.830654       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:01.830700       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:33:01.830713       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:01.830723       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:01.867919       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:33:01.867956       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:01.873050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:01.873091       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:01.874253       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:33:01.874713       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:33:01.974231       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:33:05 embed-certs-581631 kubelet[716]: I1217 08:33:05.990795     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ffd58912-6167-405e-9faa-0ee529b840b9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-g6mkz\" (UID: \"ffd58912-6167-405e-9faa-0ee529b840b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz"
	Dec 17 08:33:05 embed-certs-581631 kubelet[716]: I1217 08:33:05.990842     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccg7\" (UniqueName: \"kubernetes.io/projected/bfd63338-0d19-477c-95f5-82e2f47d96e4-kube-api-access-rccg7\") pod \"kubernetes-dashboard-855c9754f9-xhcfw\" (UID: \"bfd63338-0d19-477c-95f5-82e2f47d96e4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhcfw"
	Dec 17 08:33:05 embed-certs-581631 kubelet[716]: I1217 08:33:05.990884     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvx5\" (UniqueName: \"kubernetes.io/projected/ffd58912-6167-405e-9faa-0ee529b840b9-kube-api-access-gpvx5\") pod \"dashboard-metrics-scraper-6ffb444bf9-g6mkz\" (UID: \"ffd58912-6167-405e-9faa-0ee529b840b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz"
	Dec 17 08:33:09 embed-certs-581631 kubelet[716]: I1217 08:33:09.210635     716 scope.go:117] "RemoveContainer" containerID="b5911230dc3d9b821b169c61bbfaad8d3e01127f5ec3985d471c2abd6530636c"
	Dec 17 08:33:10 embed-certs-581631 kubelet[716]: I1217 08:33:10.216608     716 scope.go:117] "RemoveContainer" containerID="b5911230dc3d9b821b169c61bbfaad8d3e01127f5ec3985d471c2abd6530636c"
	Dec 17 08:33:10 embed-certs-581631 kubelet[716]: I1217 08:33:10.216921     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:10 embed-certs-581631 kubelet[716]: E1217 08:33:10.217171     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:11 embed-certs-581631 kubelet[716]: I1217 08:33:11.223607     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:11 embed-certs-581631 kubelet[716]: E1217 08:33:11.223836     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:14 embed-certs-581631 kubelet[716]: I1217 08:33:14.244108     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhcfw" podStartSLOduration=2.243170727 podStartE2EDuration="9.244085903s" podCreationTimestamp="2025-12-17 08:33:05 +0000 UTC" firstStartedPulling="2025-12-17 08:33:06.258140788 +0000 UTC m=+8.231632807" lastFinishedPulling="2025-12-17 08:33:13.259055955 +0000 UTC m=+15.232547983" observedRunningTime="2025-12-17 08:33:14.244026797 +0000 UTC m=+16.217518851" watchObservedRunningTime="2025-12-17 08:33:14.244085903 +0000 UTC m=+16.217577943"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: I1217 08:33:20.027678     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: I1217 08:33:20.253223     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: I1217 08:33:20.253460     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: E1217 08:33:20.253716     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:30 embed-certs-581631 kubelet[716]: I1217 08:33:30.027953     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:30 embed-certs-581631 kubelet[716]: E1217 08:33:30.028260     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:33 embed-certs-581631 kubelet[716]: I1217 08:33:33.295944     716 scope.go:117] "RemoveContainer" containerID="7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: I1217 08:33:42.143144     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: I1217 08:33:42.326137     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: I1217 08:33:42.326409     716 scope.go:117] "RemoveContainer" containerID="6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: E1217 08:33:42.326664     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: kubelet.service: Consumed 1.921s CPU time.
	
	
	==> kubernetes-dashboard [477f92307f3a906d587ee7835d84c2a4746df3a62118fdf285b9c4f1f4af8391] <==
	2025/12/17 08:33:13 Starting overwatch
	2025/12/17 08:33:13 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:13 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:13 Using secret token for csrf signing
	2025/12/17 08:33:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:13 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 08:33:13 Generating JWE encryption key
	2025/12/17 08:33:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:13 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:13 Creating in-cluster Sidecar client
	2025/12/17 08:33:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:13 Serving insecurely on HTTP port: 9090
	2025/12/17 08:33:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949] <==
	I1217 08:33:33.349164       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:33.357263       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:33.357301       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:33:33.360078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:36.816580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:41.076959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:44.675737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:47.730156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:50.752638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:50.757447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:33:50.757620       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:33:50.757755       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79bb53cc-560e-4cfd-b5ff-3872574557fe", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-581631_36bf96ef-f8ea-41d4-8788-3b86608bf5c3 became leader
	I1217 08:33:50.757804       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-581631_36bf96ef-f8ea-41d4-8788-3b86608bf5c3!
	W1217 08:33:50.759976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:50.765300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:33:50.858667       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-581631_36bf96ef-f8ea-41d4-8788-3b86608bf5c3!
	
	
	==> storage-provisioner [7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0] <==
	I1217 08:33:02.502281       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:32.508047       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-581631 -n embed-certs-581631
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-581631 -n embed-certs-581631: exit status 2 (379.250678ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-581631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-581631
helpers_test.go:244: (dbg) docker inspect embed-certs-581631:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3",
	        "Created": "2025-12-17T08:31:39.009229822Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 886545,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:32:51.456867196Z",
	            "FinishedAt": "2025-12-17T08:32:50.58064402Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/hosts",
	        "LogPath": "/var/lib/docker/containers/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3/ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3-json.log",
	        "Name": "/embed-certs-581631",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-581631:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-581631",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce9b768a52503ef7bfac17363443ca317322692df9e126e6d0e0b875d57d30e3",
	                "LowerDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/533101df77c6bbb04a1bb6d7d444b24827285e37cec12da4dbca0d8e8b7c3a8e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-581631",
	                "Source": "/var/lib/docker/volumes/embed-certs-581631/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-581631",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-581631",
	                "name.minikube.sigs.k8s.io": "embed-certs-581631",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3ec15b47bb4633a2d5a3df2b7cc2f62f40d86614a7fce9f92b4d079b7a1d742b",
	            "SandboxKey": "/var/run/docker/netns/3ec15b47bb46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-581631": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1180462b720da0ae1fa73d0b014c57b2b6955441a1e7b7b4a2e5db28ef5abec",
	                    "EndpointID": "03e8514c473f7c25c4f20cd997a9f73f6e542a82f819df14d3939b1919c11702",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "da:a6:f0:52:b2:05",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-581631",
	                        "ce9b768a5250"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631: exit status 2 (374.999948ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-581631 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-581631 logs -n 25: (1.273013585s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                                                                                               │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p no-preload-936988 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                   │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                               │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:16
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
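Each line below carries the prefix described above: severity letter (I, W, E, F), month and day, wall-clock time, the emitting process id, and the source file and line. The process id is the easiest way to tell the interleaved runs apart: 893657 is the default-k8s-diff-port-225657 start, while 890801, 885608 and 886345 belong to other profiles running in parallel. A minimal sketch for pulling only warnings and errors out of a saved copy of this log (the file name last-start.log is a placeholder, not an artifact of the run):

	# keep only W/E/F lines; the leading whitespace comes from the report formatting
	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log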
	I1217 08:33:16.728946  893657 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:16.729265  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729278  893657 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:16.729285  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729634  893657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:16.730240  893657 out.go:368] Setting JSON to false
	I1217 08:33:16.732006  893657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8142,"bootTime":1765952255,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:16.732103  893657 start.go:143] virtualization: kvm guest
	I1217 08:33:16.736563  893657 out.go:179] * [default-k8s-diff-port-225657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:16.738941  893657 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:16.738995  893657 notify.go:221] Checking for updates...
	I1217 08:33:16.742759  893657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:16.746850  893657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:16.748597  893657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:16.750659  893657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:16.753168  893657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:16.756488  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:16.757459  893657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:16.792888  893657 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:16.793019  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.867744  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.854455776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.867913  893657 docker.go:319] overlay module found
	I1217 08:33:16.871013  893657 out.go:179] * Using the docker driver based on existing profile
	I1217 08:33:16.873307  893657 start.go:309] selected driver: docker
	I1217 08:33:16.873331  893657 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.873487  893657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:16.874376  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.951072  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.935361077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.951510  893657 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:16.951573  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:16.951645  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:16.951709  893657 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.956015  893657 out.go:179] * Starting "default-k8s-diff-port-225657" primary control-plane node in "default-k8s-diff-port-225657" cluster
	I1217 08:33:16.957479  893657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:16.959060  893657 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:16.960240  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:16.960283  893657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:16.960307  893657 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:16.960329  893657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:16.960440  893657 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:16.960458  893657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:33:16.960662  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:16.986877  893657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:16.986906  893657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:16.986928  893657 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:16.986979  893657 start.go:360] acquireMachinesLock for default-k8s-diff-port-225657: {Name:mkf524609fef75b896bc809c6c5673b68f778ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:16.987060  893657 start.go:364] duration metric: took 53.96µs to acquireMachinesLock for "default-k8s-diff-port-225657"
	I1217 08:33:16.987092  893657 start.go:96] Skipping create...Using existing machine configuration
	I1217 08:33:16.987100  893657 fix.go:54] fixHost starting: 
	I1217 08:33:16.987446  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.012833  893657 fix.go:112] recreateIfNeeded on default-k8s-diff-port-225657: state=Stopped err=<nil>
	W1217 08:33:17.012874  893657 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 08:33:13.213803  890801 addons.go:530] duration metric: took 2.18918486s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:13.705622  890801 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 08:33:13.710080  890801 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 08:33:13.711221  890801 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:33:13.711249  890801 api_server.go:131] duration metric: took 506.788041ms to wait for apiserver health ...
	I1217 08:33:13.711258  890801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:13.715494  890801 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:13.715559  890801 system_pods.go:61] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.715571  890801 system_pods.go:61] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.715580  890801 system_pods.go:61] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.715587  890801 system_pods.go:61] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.715598  890801 system_pods.go:61] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.715604  890801 system_pods.go:61] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.715610  890801 system_pods.go:61] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.715617  890801 system_pods.go:61] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.715626  890801 system_pods.go:74] duration metric: took 4.361363ms to wait for pod list to return data ...
	I1217 08:33:13.715639  890801 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:13.718438  890801 default_sa.go:45] found service account: "default"
	I1217 08:33:13.718465  890801 default_sa.go:55] duration metric: took 2.817296ms for default service account to be created ...
	I1217 08:33:13.718477  890801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:13.722138  890801 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:13.722180  890801 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.722194  890801 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.722204  890801 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.722214  890801 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.722223  890801 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.722234  890801 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.722243  890801 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.722259  890801 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.722272  890801 system_pods.go:126] duration metric: took 3.785279ms to wait for k8s-apps to be running ...
	I1217 08:33:13.722289  890801 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:13.722352  890801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:13.737774  890801 system_svc.go:56] duration metric: took 15.474847ms WaitForService to wait for kubelet
	I1217 08:33:13.737805  890801 kubeadm.go:587] duration metric: took 2.713427844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:13.737833  890801 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:13.772714  890801 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:13.772756  890801 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:13.772775  890801 node_conditions.go:105] duration metric: took 34.937186ms to run NodePressure ...
	I1217 08:33:13.772792  890801 start.go:242] waiting for startup goroutines ...
	I1217 08:33:13.772803  890801 start.go:247] waiting for cluster config update ...
	I1217 08:33:13.772825  890801 start.go:256] writing updated cluster config ...
	I1217 08:33:13.773173  890801 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:13.777812  890801 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:13.783637  890801 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:15.868337  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:15.181390  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.182344  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:19.681167  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.003119  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:19.003173  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:21.003325  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:17.017254  893657 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-225657" ...
	I1217 08:33:17.017346  893657 cli_runner.go:164] Run: docker start default-k8s-diff-port-225657
	I1217 08:33:17.373663  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.400760  893657 kic.go:432] container "default-k8s-diff-port-225657" state is running.
	I1217 08:33:17.401442  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:17.429446  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:17.429718  893657 machine.go:94] provisionDockerMachine start ...
	I1217 08:33:17.429809  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:17.458096  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:17.458238  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:17.458254  893657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:33:17.459170  893657 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48512->127.0.0.1:33530: read: connection reset by peer
	I1217 08:33:20.612283  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.612308  893657 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225657"
	I1217 08:33:20.612373  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.636332  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.636502  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.636519  893657 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225657 && echo "default-k8s-diff-port-225657" | sudo tee /etc/hostname
	I1217 08:33:20.804510  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.804742  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.834923  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.835091  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.835140  893657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225657/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:33:20.984217  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:33:20.984254  893657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:33:20.984307  893657 ubuntu.go:190] setting up certificates
	I1217 08:33:20.984330  893657 provision.go:84] configureAuth start
	I1217 08:33:20.984434  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:21.010705  893657 provision.go:143] copyHostCerts
	I1217 08:33:21.010798  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:33:21.010816  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:33:21.010896  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:33:21.011010  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:33:21.011024  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:33:21.011068  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:33:21.011154  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:33:21.011165  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:33:21.011204  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:33:21.011353  893657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225657 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-225657 localhost minikube]
	I1217 08:33:21.094979  893657 provision.go:177] copyRemoteCerts
	I1217 08:33:21.095063  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:33:21.095123  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.119755  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:21.226499  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:33:21.252430  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 08:33:21.276413  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:33:21.304875  893657 provision.go:87] duration metric: took 320.523082ms to configureAuth
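configureAuth above regenerates the machine server certificate with the SANs shown at 08:33:21.011 and copies it to /etc/docker/server.pem inside the node. A quick, illustrative way to confirm those SANs on the copied certificate (not a step this test performs; assumes an OpenSSL new enough to support -ext):

	out/minikube-linux-amd64 -p default-k8s-diff-port-225657 ssh -- \
	  sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
	# expected SANs: 127.0.0.1, 192.168.103.2, default-k8s-diff-port-225657, localhost, minikube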
	I1217 08:33:21.304910  893657 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:33:21.305140  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:21.305286  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.329333  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:21.329469  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:21.329488  893657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1217 08:33:18.289602  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:20.292744  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:22.296974  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:21.764845  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:24.179988  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	I1217 08:33:22.731689  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:33:22.731722  893657 machine.go:97] duration metric: took 5.301986136s to provisionDockerMachine
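The provisioning step that just finished wrote /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and then restarted crio, so that image pulls from registries on ClusterIP addresses inside the service CIDR (such as the registry addon) do not require TLS. Two illustrative checks, assuming the crio unit actually sources that file (which is why the write is followed by a restart):

	sudo cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -n CRIO_MINIKUBE_OPTIONS   # shows where the unit consumes the variable, if it does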
	I1217 08:33:22.731749  893657 start.go:293] postStartSetup for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:33:22.731769  893657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:33:22.731852  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:33:22.731920  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.761364  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:22.876306  893657 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:33:22.881359  893657 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:33:22.881395  893657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:33:22.881410  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:33:22.881482  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:33:22.881678  893657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:33:22.881825  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:33:22.894563  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:22.920348  893657 start.go:296] duration metric: took 188.5726ms for postStartSetup
	I1217 08:33:22.920449  893657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:33:22.920492  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.945406  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.048667  893657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:33:23.054963  893657 fix.go:56] duration metric: took 6.067856877s for fixHost
	I1217 08:33:23.054990  893657 start.go:83] releasing machines lock for "default-k8s-diff-port-225657", held for 6.067916149s
	I1217 08:33:23.055062  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:23.078512  893657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:33:23.078652  893657 ssh_runner.go:195] Run: cat /version.json
	I1217 08:33:23.078657  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.078715  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.105947  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.108771  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.290972  893657 ssh_runner.go:195] Run: systemctl --version
	I1217 08:33:23.299819  893657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:33:23.349000  893657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:33:23.357029  893657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:33:23.357106  893657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:33:23.369670  893657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:33:23.369700  893657 start.go:496] detecting cgroup driver to use...
	I1217 08:33:23.369789  893657 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:33:23.369842  893657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:33:23.391525  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:33:23.409286  893657 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:33:23.409355  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:33:23.431984  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:33:23.448992  893657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:33:23.545374  893657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:33:23.651657  893657 docker.go:234] disabling docker service ...
	I1217 08:33:23.651738  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:33:23.671894  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:33:23.692032  893657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:33:23.817651  893657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:33:23.939609  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:33:23.958144  893657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:33:23.979250  893657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:33:23.979317  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:23.992227  893657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:33:23.992295  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.006950  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.020376  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.035025  893657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:33:24.046957  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.061093  893657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.074985  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.089611  893657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:33:24.101042  893657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:33:24.111709  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:24.230001  893657 ssh_runner.go:195] Run: sudo systemctl restart crio
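Between 08:33:23.958 and the restart above at 08:33:24.230, the node's runtime plumbing is reconfigured: crictl is pointed at CRI-O's socket via /etc/crictl.yaml, and /etc/crio/crio.conf.d/02-crio.conf is rewritten by the sed commands. The log shows only the commands, so the following is a sketch of the results they imply rather than a dump of the real files (line order in the conf may differ):

	sudo cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	sudo crictl ps -a   # sanity check against the socket once crio is back up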
	I1217 08:33:24.884276  893657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:33:24.884364  893657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:33:24.889824  893657 start.go:564] Will wait 60s for crictl version
	I1217 08:33:24.889930  893657 ssh_runner.go:195] Run: which crictl
	I1217 08:33:24.895473  893657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:33:24.926169  893657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:33:24.926256  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.960427  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.997284  893657 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 08:33:24.999194  893657 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:25.022353  893657 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:33:25.027067  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.040819  893657 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:33:25.040970  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:25.041036  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.078474  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.078507  893657 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:33:25.078631  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.106774  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.106807  893657 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:33:25.106818  893657 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1217 08:33:25.106948  893657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
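The [Unit]/[Service]/[Install] snippet above is rendered into the kubelet systemd unit files written further down in this log (the 352-byte kubelet.service and the 379-byte 10-kubeadm.conf drop-in). Once daemon-reload and start have run, the merged unit can be inspected with standard systemd tooling; the commands below are illustrative, not part of the run:

	systemctl cat kubelet        # base unit plus the 10-kubeadm.conf override
	systemctl is-active kubelet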
	I1217 08:33:25.107036  893657 ssh_runner.go:195] Run: crio config
	I1217 08:33:25.157252  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:25.157281  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:25.157301  893657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:33:25.157340  893657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225657 NodeName:default-k8s-diff-port-225657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:33:25.157504  893657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
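This generated kubeadm configuration is what gets written to /var/tmp/minikube/kubeadm.yaml.new (the 2227-byte scp below). When a config like this needs a manual sanity check, recent kubeadm releases can validate it directly; the command below is ordinary kubeadm usage under that assumption and is not a step this test performs:

	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new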
	
	I1217 08:33:25.157619  893657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:33:25.166826  893657 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:33:25.166896  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:33:25.175526  893657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1217 08:33:25.190511  893657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:33:25.205768  893657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 08:33:25.223688  893657 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:33:25.229125  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.242599  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:25.333339  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:25.360367  893657 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657 for IP: 192.168.103.2
	I1217 08:33:25.360421  893657 certs.go:195] generating shared ca certs ...
	I1217 08:33:25.360443  893657 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:25.360645  893657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:33:25.360690  893657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:33:25.360701  893657 certs.go:257] generating profile certs ...
	I1217 08:33:25.360801  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key
	I1217 08:33:25.360866  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92
	I1217 08:33:25.360902  893657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key
	I1217 08:33:25.361012  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:33:25.361046  893657 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:33:25.361053  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:33:25.361077  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:33:25.361100  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:33:25.361123  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:33:25.361168  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:25.361783  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:33:25.382178  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:33:25.405095  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:33:25.426692  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:33:25.452196  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 08:33:25.472263  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:33:25.492102  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:33:25.512166  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:33:25.530987  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:33:25.550506  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:33:25.571554  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:33:25.591167  893657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:33:25.604816  893657 ssh_runner.go:195] Run: openssl version
	I1217 08:33:25.611390  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.620038  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:33:25.628157  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632565  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632630  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.668190  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:33:25.677861  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.686457  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:33:25.694766  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.698960  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.699026  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.735265  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:33:25.743914  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.752739  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:33:25.762448  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766776  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766841  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.804716  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:33:25.813678  893657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:33:25.818021  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:33:25.853937  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:33:25.905092  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:33:25.949996  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:33:25.998953  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:33:26.055041  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
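	The block above is the node-side certificate setup: the profile certs are reused, the shared CAs and per-cluster PEMs are scp'd onto the node, each extra CA is exposed to OpenSSL through a hash-named symlink under /etc/ssl/certs, and every control-plane certificate is checked for expiry within the next 24 hours. A minimal sketch of the same hash-and-link pattern, using one of the files from this run (illustrative only; the exact commands minikube issues differ slightly):
	
	    # copy the CA and link it under both its plain name and its subject-hash name
	    sudo cp minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0
	    # non-zero exit if the cert expires within the next 86400 seconds (24 h)
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400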
	I1217 08:33:26.093895  893657 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:26.093984  893657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:33:26.094037  893657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:33:26.131324  893657 cri.go:89] found id: "29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9"
	I1217 08:33:26.131350  893657 cri.go:89] found id: "f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52"
	I1217 08:33:26.131356  893657 cri.go:89] found id: "e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610"
	I1217 08:33:26.131361  893657 cri.go:89] found id: "75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d"
	I1217 08:33:26.131366  893657 cri.go:89] found id: ""
	I1217 08:33:26.131415  893657 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:33:26.144718  893657 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:26Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:26.144807  893657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:33:26.153957  893657 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:33:26.153979  893657 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:33:26.154032  893657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:33:26.162673  893657 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:33:26.164033  893657 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-225657" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.165037  893657 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-225657" cluster setting kubeconfig missing "default-k8s-diff-port-225657" context setting]
	I1217 08:33:26.166469  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.168992  893657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:33:26.178665  893657 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 08:33:26.178709  893657 kubeadm.go:602] duration metric: took 24.72291ms to restartPrimaryControlPlane
	I1217 08:33:26.178722  893657 kubeadm.go:403] duration metric: took 84.838549ms to StartCluster
	I1217 08:33:26.178743  893657 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.178810  893657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.181267  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.181609  893657 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:26.181743  893657 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:33:26.181863  893657 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181869  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:26.181897  893657 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.181907  893657 addons.go:248] addon storage-provisioner should already be in state true
	I1217 08:33:26.181905  893657 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181922  893657 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181933  893657 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-225657"
	I1217 08:33:26.181936  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	W1217 08:33:26.181943  893657 addons.go:248] addon dashboard should already be in state true
	I1217 08:33:26.181946  893657 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225657"
	I1217 08:33:26.181976  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.182259  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182470  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182505  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.184915  893657 out.go:179] * Verifying Kubernetes components...
	I1217 08:33:26.186210  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:26.212304  893657 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 08:33:26.214226  893657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:33:26.214980  893657 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 08:33:23.502639  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:26.006843  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:26.216388  893657 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.216412  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:33:26.216477  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.217466  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 08:33:26.217490  893657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 08:33:26.217560  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.228115  893657 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.228150  893657 addons.go:248] addon default-storageclass should already be in state true
	I1217 08:33:26.228184  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.228704  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.261124  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.263048  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.276039  893657 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.276071  893657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:33:26.276135  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.304101  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.360397  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:26.376999  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 08:33:26.377127  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 08:33:26.380755  893657 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:26.392863  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 08:33:26.392899  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 08:33:26.392976  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.413220  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.414384  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 08:33:26.414420  893657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 08:33:26.434913  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 08:33:26.434938  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 08:33:26.451283  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 08:33:26.451316  893657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 08:33:26.476854  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 08:33:26.476882  893657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 08:33:26.492758  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 08:33:26.492796  893657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 08:33:26.508872  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 08:33:26.508899  893657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 08:33:26.524202  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 08:33:26.524232  893657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 08:33:26.539724  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 08:33:24.789524  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:27.291456  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:27.959793  893657 node_ready.go:49] node "default-k8s-diff-port-225657" is "Ready"
	I1217 08:33:27.959838  893657 node_ready.go:38] duration metric: took 1.579048972s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:27.959857  893657 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:33:27.959926  893657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:33:28.524393  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.131379535s)
	I1217 08:33:28.524466  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111215264s)
	I1217 08:33:28.524703  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.98493694s)
	I1217 08:33:28.524763  893657 api_server.go:72] duration metric: took 2.343114327s to wait for apiserver process to appear ...
	I1217 08:33:28.524791  893657 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:33:28.524815  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:28.526653  893657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-225657 addons enable metrics-server
	
	I1217 08:33:28.530002  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:28.530034  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:28.535131  893657 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1217 08:33:26.679455  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:29.179302  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:28.012078  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:30.501159  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:28.536292  893657 addons.go:530] duration metric: took 2.354557541s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:29.025630  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.030789  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:29.030828  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:29.525077  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.529889  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1217 08:33:29.530993  893657 api_server.go:141] control plane version: v1.34.3
	I1217 08:33:29.531018  893657 api_server.go:131] duration metric: took 1.006217623s to wait for apiserver health ...
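	The restart path gates on the API server's aggregated /healthz endpoint: each 500 above lists the individual checks, and the [-] entries (the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks) are still initializing right after the control plane comes back, flipping to 200 roughly a second later. The same per-check view can be requested by hand; this is only an illustration using the address and port from this run (anonymous access to /healthz is normally allowed via the default system:public-info-viewer role):
	
	    # verbose, per-check health report from the API server
	    curl -k https://192.168.103.2:8444/healthz?verbose
	    # a single check can also be queried on its own, e.g. etcd
	    curl -k https://192.168.103.2:8444/healthz/etcd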
	I1217 08:33:29.531030  893657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:29.537008  893657 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:29.537148  893657 system_pods.go:61] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.537251  893657 system_pods.go:61] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.537275  893657 system_pods.go:61] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.537287  893657 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.537302  893657 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.537311  893657 system_pods.go:61] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.537373  893657 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.537391  893657 system_pods.go:61] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.537403  893657 system_pods.go:74] duration metric: took 6.36482ms to wait for pod list to return data ...
	I1217 08:33:29.537418  893657 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:29.540237  893657 default_sa.go:45] found service account: "default"
	I1217 08:33:29.540261  893657 default_sa.go:55] duration metric: took 2.835186ms for default service account to be created ...
	I1217 08:33:29.540272  893657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:29.547420  893657 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:29.547465  893657 system_pods.go:89] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.547486  893657 system_pods.go:89] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.547494  893657 system_pods.go:89] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.547502  893657 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.547511  893657 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.547519  893657 system_pods.go:89] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.547526  893657 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.547545  893657 system_pods.go:89] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.547556  893657 system_pods.go:126] duration metric: took 7.275351ms to wait for k8s-apps to be running ...
	I1217 08:33:29.547565  893657 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:29.547621  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:29.570551  893657 system_svc.go:56] duration metric: took 22.962055ms WaitForService to wait for kubelet
	I1217 08:33:29.570588  893657 kubeadm.go:587] duration metric: took 3.388942328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:29.570612  893657 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:29.573955  893657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:29.573987  893657 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:29.574004  893657 node_conditions.go:105] duration metric: took 3.385946ms to run NodePressure ...
	I1217 08:33:29.574016  893657 start.go:242] waiting for startup goroutines ...
	I1217 08:33:29.574023  893657 start.go:247] waiting for cluster config update ...
	I1217 08:33:29.574033  893657 start.go:256] writing updated cluster config ...
	I1217 08:33:29.574301  893657 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:29.579418  893657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:29.583233  893657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:31.590019  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:29.790012  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:32.289282  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:31.679660  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:34.180323  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:33.000662  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:34.501698  886345 pod_ready.go:94] pod "coredns-66bc5c9577-p7sqj" is "Ready"
	I1217 08:33:34.501740  886345 pod_ready.go:86] duration metric: took 31.006567227s for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.504499  886345 pod_ready.go:83] waiting for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.509821  886345 pod_ready.go:94] pod "etcd-embed-certs-581631" is "Ready"
	I1217 08:33:34.509852  886345 pod_ready.go:86] duration metric: took 5.326473ms for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.512747  886345 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.518177  886345 pod_ready.go:94] pod "kube-apiserver-embed-certs-581631" is "Ready"
	I1217 08:33:34.518209  886345 pod_ready.go:86] duration metric: took 5.434504ms for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.520782  886345 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.699712  886345 pod_ready.go:94] pod "kube-controller-manager-embed-certs-581631" is "Ready"
	I1217 08:33:34.699750  886345 pod_ready.go:86] duration metric: took 178.942994ms for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.899576  886345 pod_ready.go:83] waiting for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.299641  886345 pod_ready.go:94] pod "kube-proxy-7z26t" is "Ready"
	I1217 08:33:35.299677  886345 pod_ready.go:86] duration metric: took 400.071136ms for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.499469  886345 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.898985  886345 pod_ready.go:94] pod "kube-scheduler-embed-certs-581631" is "Ready"
	I1217 08:33:35.899016  886345 pod_ready.go:86] duration metric: took 399.518108ms for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.899032  886345 pod_ready.go:40] duration metric: took 32.408536567s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:35.962165  886345 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:33:35.967810  886345 out.go:179] * Done! kubectl is now configured to use "embed-certs-581631" cluster and "default" namespace by default
	I1217 08:33:35.180035  885608 pod_ready.go:94] pod "coredns-5dd5756b68-mr99d" is "Ready"
	I1217 08:33:35.180070  885608 pod_ready.go:86] duration metric: took 33.507046133s for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.183848  885608 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.189882  885608 pod_ready.go:94] pod "etcd-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.189917  885608 pod_ready.go:86] duration metric: took 6.040788ms for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.193611  885608 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.199327  885608 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.199356  885608 pod_ready.go:86] duration metric: took 5.717005ms for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.202742  885608 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.377269  885608 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.377299  885608 pod_ready.go:86] duration metric: took 174.528391ms for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.578921  885608 pod_ready.go:83] waiting for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.977275  885608 pod_ready.go:94] pod "kube-proxy-cwfwr" is "Ready"
	I1217 08:33:35.977308  885608 pod_ready.go:86] duration metric: took 398.362323ms for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.179026  885608 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580866  885608 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-640910" is "Ready"
	I1217 08:33:36.580905  885608 pod_ready.go:86] duration metric: took 401.837858ms for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580922  885608 pod_ready.go:40] duration metric: took 34.912908892s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:36.657518  885608 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 08:33:36.659911  885608 out.go:203] 
	W1217 08:33:36.661799  885608 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 08:33:36.663761  885608 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 08:33:36.666738  885608 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-640910" cluster and "default" namespace by default
	W1217 08:33:34.089133  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:36.092451  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:34.289870  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:36.290714  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:38.589783  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:41.088727  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:38.290930  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:40.789798  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:43.089067  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:45.089693  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:43.290645  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:45.789580  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:46.288655  890801 pod_ready.go:94] pod "coredns-7d764666f9-ssxts" is "Ready"
	I1217 08:33:46.288692  890801 pod_ready.go:86] duration metric: took 32.505014626s for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.291480  890801 pod_ready.go:83] waiting for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.297312  890801 pod_ready.go:94] pod "etcd-no-preload-936988" is "Ready"
	I1217 08:33:46.297340  890801 pod_ready.go:86] duration metric: took 5.835833ms for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.392910  890801 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.397502  890801 pod_ready.go:94] pod "kube-apiserver-no-preload-936988" is "Ready"
	I1217 08:33:46.397547  890801 pod_ready.go:86] duration metric: took 4.609982ms for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.399936  890801 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.487409  890801 pod_ready.go:94] pod "kube-controller-manager-no-preload-936988" is "Ready"
	I1217 08:33:46.487441  890801 pod_ready.go:86] duration metric: took 87.480941ms for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.687921  890801 pod_ready.go:83] waiting for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.087638  890801 pod_ready.go:94] pod "kube-proxy-rrz8t" is "Ready"
	I1217 08:33:47.087672  890801 pod_ready.go:86] duration metric: took 399.721259ms for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.287284  890801 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687063  890801 pod_ready.go:94] pod "kube-scheduler-no-preload-936988" is "Ready"
	I1217 08:33:47.687100  890801 pod_ready.go:86] duration metric: took 399.78978ms for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687115  890801 pod_ready.go:40] duration metric: took 33.909261319s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:47.739016  890801 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:33:47.741018  890801 out.go:179] * Done! kubectl is now configured to use "no-preload-936988" cluster and "default" namespace by default
	W1217 08:33:47.589223  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:49.589806  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 17 08:33:20 embed-certs-581631 crio[567]: time="2025-12-17T08:33:20.095718373Z" level=info msg="Started container" PID=1744 containerID=77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper id=f5417407-8ba2-456d-ac7e-3aa5a2de567e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e82ab4c77900e6f286c12781fb0f947a458a8a1fe71bcb0c976013c9ecc253b7
	Dec 17 08:33:20 embed-certs-581631 crio[567]: time="2025-12-17T08:33:20.254926424Z" level=info msg="Removing container: 4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c" id=23f3393f-fa4c-4670-8acd-5fd86be17dc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:20 embed-certs-581631 crio[567]: time="2025-12-17T08:33:20.268035663Z" level=info msg="Removed container 4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=23f3393f-fa4c-4670-8acd-5fd86be17dc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.296363477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8f3b3f0e-1104-46d7-a82b-141a0ba25785 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.297337996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ebf7380-defb-49a5-8fb5-8625048c0c27 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.29847711Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1eda59a7-1e71-49ce-ad9c-f18146dba4fe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.29867976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.30333192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.303526806Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f63ab81c7fde832d05a6398f4c71dbff192ebae776d83d0f17808565dfd0e25d/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.303584818Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f63ab81c7fde832d05a6398f4c71dbff192ebae776d83d0f17808565dfd0e25d/merged/etc/group: no such file or directory"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.303963504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.331715854Z" level=info msg="Created container 10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949: kube-system/storage-provisioner/storage-provisioner" id=1eda59a7-1e71-49ce-ad9c-f18146dba4fe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.332435653Z" level=info msg="Starting container: 10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949" id=130666be-c045-48d5-b34a-45a60d7362f7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:33 embed-certs-581631 crio[567]: time="2025-12-17T08:33:33.334826433Z" level=info msg="Started container" PID=1758 containerID=10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949 description=kube-system/storage-provisioner/storage-provisioner id=130666be-c045-48d5-b34a-45a60d7362f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c1f33f58ce9c201637c7587ab57c9845ea471636ab5b4de3b09be12f994c2ba5
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.14373502Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=81d1ff9a-6991-4259-87db-4e3d3b07c86d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.14470997Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=853d4788-7a05-4863-8912-da27897825c2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.145804644Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=22488fd4-c8aa-4af2-b920-8eca4dba1dbc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.145944146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.151156222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.151663803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.186001726Z" level=info msg="Created container 6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=22488fd4-c8aa-4af2-b920-8eca4dba1dbc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.186721822Z" level=info msg="Starting container: 6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187" id=67111937-74d5-46ae-9aed-dd8439a7829d name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.188616191Z" level=info msg="Started container" PID=1794 containerID=6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper id=67111937-74d5-46ae-9aed-dd8439a7829d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e82ab4c77900e6f286c12781fb0f947a458a8a1fe71bcb0c976013c9ecc253b7
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.327631139Z" level=info msg="Removing container: 77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26" id=23823168-5dae-481d-b61b-a1312b1f99e1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:42 embed-certs-581631 crio[567]: time="2025-12-17T08:33:42.339210285Z" level=info msg="Removed container 77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz/dashboard-metrics-scraper" id=23823168-5dae-481d-b61b-a1312b1f99e1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6a158a46500d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   e82ab4c77900e       dashboard-metrics-scraper-6ffb444bf9-g6mkz   kubernetes-dashboard
	10bee23037121       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   c1f33f58ce9c2       storage-provisioner                          kube-system
	477f92307f3a9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   649896cf3187f       kubernetes-dashboard-855c9754f9-xhcfw        kubernetes-dashboard
	74336b447d076       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   ccd13e5699138       coredns-66bc5c9577-p7sqj                     kube-system
	2251d699b8cf5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   abfce39a06c67       busybox                                      default
	8c9c5cb7d5f07       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           50 seconds ago      Running             kube-proxy                  0                   113e530d3a016       kube-proxy-7z26t                             kube-system
	9f7ac5f3d2a15       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   63bfca7af1a55       kindnet-wv7n7                                kube-system
	7ad4a12142571       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   c1f33f58ce9c2       storage-provisioner                          kube-system
	79831ec89cc5a       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           54 seconds ago      Running             kube-apiserver              0                   de73cd7367812       kube-apiserver-embed-certs-581631            kube-system
	c329f979a08a4       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           54 seconds ago      Running             kube-controller-manager     0                   e3708cff3899b       kube-controller-manager-embed-certs-581631   kube-system
	f79f6823e0e24       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           54 seconds ago      Running             kube-scheduler              0                   8a55dba3db917       kube-scheduler-embed-certs-581631            kube-system
	7d3db7fd1bb8c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   e9ef0265e1866       etcd-embed-certs-581631                      kube-system
	
	
	==> coredns [74336b447d076eb6601b50d8a5bbc837099ef6eee8d219659ec500edaf3ae63a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58146 - 23582 "HINFO IN 5569509536565776757.7420217286395446187. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033614361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-581631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-581631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=embed-certs-581631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:31:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-581631
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:33:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:33:32 +0000   Wed, 17 Dec 2025 08:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-581631
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                54d7a8c2-691a-45c0-b4a2-f9840ad8416b
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-p7sqj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-581631                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-wv7n7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-581631             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-581631    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-7z26t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-581631             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g6mkz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xhcfw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-581631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-581631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-581631 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-581631 event: Registered Node embed-certs-581631 in Controller
	  Normal  NodeReady                93s                kubelet          Node embed-certs-581631 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-581631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-581631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-581631 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-581631 event: Registered Node embed-certs-581631 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [7d3db7fd1bb8c12d886bfea8c0b0731baaa3804b175ca5e69fc930ef2c9c3881] <==
	{"level":"warn","ts":"2025-12-17T08:33:01.110451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.120267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.136142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.142447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.151844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.159446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.167362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.176743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.185066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.194254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.203030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.213077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.222186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.231223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.240183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.248675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.255744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.263658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.276667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.286201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.294624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.319316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.327128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.335429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:01.408932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49970","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:33:53 up  2:16,  0 user,  load average: 5.02, 4.19, 2.91
	Linux embed-certs-581631 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f7ac5f3d2a15d2684611317f67a3c2210f9c68172e2cf5714548c2f9bd54ef3] <==
	I1217 08:33:02.760793       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:02.761169       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 08:33:02.761578       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:02.761663       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:02.761689       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:03.060672       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:03.158057       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:03.158203       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:03.158602       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:03.458692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:03.458741       1 metrics.go:72] Registering metrics
	I1217 08:33:03.458876       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:13.060653       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:13.060833       1 main.go:301] handling current node
	I1217 08:33:23.061671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:23.061715       1 main.go:301] handling current node
	I1217 08:33:33.060856       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:33.060893       1 main.go:301] handling current node
	I1217 08:33:43.060598       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:43.060637       1 main.go:301] handling current node
	I1217 08:33:53.069616       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 08:33:53.069654       1 main.go:301] handling current node
	
	
	==> kube-apiserver [79831ec89cc5a55b9420cabfb5188263a179325c1b809e4f1b11f241ca39131c] <==
	I1217 08:33:01.914752       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:33:01.914957       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:33:01.915225       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 08:33:01.915384       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 08:33:01.915434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:01.920956       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 08:33:01.928237       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:33:01.928277       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 08:33:01.928242       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 08:33:01.928571       1 aggregator.go:171] initial CRD sync complete...
	I1217 08:33:01.928586       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 08:33:01.928593       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 08:33:01.928600       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:33:01.961947       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:33:02.171244       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:33:02.211394       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:33:02.260942       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:33:02.286907       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:02.297812       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:02.352332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.142.81"}
	I1217 08:33:02.364513       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.85.160"}
	I1217 08:33:02.822268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:33:05.534102       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:33:05.787911       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:33:05.837077       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c329f979a08a46f2a0e41d1fd5c750409c27319d839acebb55967a6c5075748c] <==
	I1217 08:33:05.241634       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 08:33:05.241647       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 08:33:05.241655       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 08:33:05.244740       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:33:05.247946       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 08:33:05.251279       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 08:33:05.280607       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:33:05.280655       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 08:33:05.280703       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 08:33:05.280707       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 08:33:05.280733       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 08:33:05.280768       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:33:05.280779       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 08:33:05.280781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 08:33:05.281014       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 08:33:05.287231       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:33:05.287235       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:33:05.288337       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 08:33:05.288406       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:33:05.292604       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 08:33:05.294857       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 08:33:05.297109       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 08:33:05.301698       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:33:05.303983       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:33:05.304188       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8c9c5cb7d5f07608866c6a6069eb19c6ff6912f829c84b1d6106b6f37984966b] <==
	I1217 08:33:02.542569       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:33:02.619967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:33:02.720570       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:33:02.720615       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:33:02.720712       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:33:02.747274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:02.747360       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:33:02.753469       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:33:02.754035       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:33:02.754070       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:02.755745       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:33:02.755822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:33:02.755856       1 config.go:200] "Starting service config controller"
	I1217 08:33:02.755862       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:33:02.755878       1 config.go:309] "Starting node config controller"
	I1217 08:33:02.755884       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:33:02.755999       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:33:02.756013       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:33:02.856296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:33:02.856313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:33:02.856351       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:33:02.856483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f79f6823e0e24a3f6a2e174ad04b8359e9c8d6e4bc205fed1c8b015b611cd6d2] <==
	I1217 08:32:59.388358       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:33:01.830654       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:01.830700       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:33:01.830713       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:01.830723       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:01.867919       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:33:01.867956       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:01.873050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:01.873091       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:01.874253       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:33:01.874713       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:33:01.974231       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:33:05 embed-certs-581631 kubelet[716]: I1217 08:33:05.990795     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ffd58912-6167-405e-9faa-0ee529b840b9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-g6mkz\" (UID: \"ffd58912-6167-405e-9faa-0ee529b840b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz"
	Dec 17 08:33:05 embed-certs-581631 kubelet[716]: I1217 08:33:05.990842     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccg7\" (UniqueName: \"kubernetes.io/projected/bfd63338-0d19-477c-95f5-82e2f47d96e4-kube-api-access-rccg7\") pod \"kubernetes-dashboard-855c9754f9-xhcfw\" (UID: \"bfd63338-0d19-477c-95f5-82e2f47d96e4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhcfw"
	Dec 17 08:33:05 embed-certs-581631 kubelet[716]: I1217 08:33:05.990884     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvx5\" (UniqueName: \"kubernetes.io/projected/ffd58912-6167-405e-9faa-0ee529b840b9-kube-api-access-gpvx5\") pod \"dashboard-metrics-scraper-6ffb444bf9-g6mkz\" (UID: \"ffd58912-6167-405e-9faa-0ee529b840b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz"
	Dec 17 08:33:09 embed-certs-581631 kubelet[716]: I1217 08:33:09.210635     716 scope.go:117] "RemoveContainer" containerID="b5911230dc3d9b821b169c61bbfaad8d3e01127f5ec3985d471c2abd6530636c"
	Dec 17 08:33:10 embed-certs-581631 kubelet[716]: I1217 08:33:10.216608     716 scope.go:117] "RemoveContainer" containerID="b5911230dc3d9b821b169c61bbfaad8d3e01127f5ec3985d471c2abd6530636c"
	Dec 17 08:33:10 embed-certs-581631 kubelet[716]: I1217 08:33:10.216921     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:10 embed-certs-581631 kubelet[716]: E1217 08:33:10.217171     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:11 embed-certs-581631 kubelet[716]: I1217 08:33:11.223607     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:11 embed-certs-581631 kubelet[716]: E1217 08:33:11.223836     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:14 embed-certs-581631 kubelet[716]: I1217 08:33:14.244108     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhcfw" podStartSLOduration=2.243170727 podStartE2EDuration="9.244085903s" podCreationTimestamp="2025-12-17 08:33:05 +0000 UTC" firstStartedPulling="2025-12-17 08:33:06.258140788 +0000 UTC m=+8.231632807" lastFinishedPulling="2025-12-17 08:33:13.259055955 +0000 UTC m=+15.232547983" observedRunningTime="2025-12-17 08:33:14.244026797 +0000 UTC m=+16.217518851" watchObservedRunningTime="2025-12-17 08:33:14.244085903 +0000 UTC m=+16.217577943"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: I1217 08:33:20.027678     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: I1217 08:33:20.253223     716 scope.go:117] "RemoveContainer" containerID="4bbf5f1406dba482bb6702950d83d7b62e3ab5ad97764169c0c5be7fa736509c"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: I1217 08:33:20.253460     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:20 embed-certs-581631 kubelet[716]: E1217 08:33:20.253716     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:30 embed-certs-581631 kubelet[716]: I1217 08:33:30.027953     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:30 embed-certs-581631 kubelet[716]: E1217 08:33:30.028260     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:33 embed-certs-581631 kubelet[716]: I1217 08:33:33.295944     716 scope.go:117] "RemoveContainer" containerID="7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: I1217 08:33:42.143144     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: I1217 08:33:42.326137     716 scope.go:117] "RemoveContainer" containerID="77010e83e5c81f8b87fb78e1cc78045e6952c4bac64b769efb9fc9cfe0f9ea26"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: I1217 08:33:42.326409     716 scope.go:117] "RemoveContainer" containerID="6a158a46500d53551e0af9d74e567d20f7c8588fac51ca7cb54610d90ffa7187"
	Dec 17 08:33:42 embed-certs-581631 kubelet[716]: E1217 08:33:42.326664     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g6mkz_kubernetes-dashboard(ffd58912-6167-405e-9faa-0ee529b840b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g6mkz" podUID="ffd58912-6167-405e-9faa-0ee529b840b9"
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:33:48 embed-certs-581631 systemd[1]: kubelet.service: Consumed 1.921s CPU time.
	
	
	==> kubernetes-dashboard [477f92307f3a906d587ee7835d84c2a4746df3a62118fdf285b9c4f1f4af8391] <==
	2025/12/17 08:33:13 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:13 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:13 Using secret token for csrf signing
	2025/12/17 08:33:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:13 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 08:33:13 Generating JWE encryption key
	2025/12/17 08:33:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:13 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:13 Creating in-cluster Sidecar client
	2025/12/17 08:33:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:13 Serving insecurely on HTTP port: 9090
	2025/12/17 08:33:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:13 Starting overwatch
	
	
	==> storage-provisioner [10bee2303712133eb2b14f2cda2256423485c38866bcbb295ef6a7fbc2d90949] <==
	I1217 08:33:33.349164       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:33.357263       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:33.357301       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:33:33.360078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:36.816580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:41.076959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:44.675737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:47.730156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:50.752638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:50.757447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:33:50.757620       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:33:50.757755       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79bb53cc-560e-4cfd-b5ff-3872574557fe", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-581631_36bf96ef-f8ea-41d4-8788-3b86608bf5c3 became leader
	I1217 08:33:50.757804       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-581631_36bf96ef-f8ea-41d4-8788-3b86608bf5c3!
	W1217 08:33:50.759976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:50.765300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:33:50.858667       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-581631_36bf96ef-f8ea-41d4-8788-3b86608bf5c3!
	W1217 08:33:52.769222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:52.774412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7ad4a121425718b62d24f7786ab1468fe9ccc850850ce4710f25d5a6e5b5f9e0] <==
	I1217 08:33:02.502281       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:32.508047       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-581631 -n embed-certs-581631
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-581631 -n embed-certs-581631: exit status 2 (375.458933ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-581631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.67s)
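Note on the GUEST_PAUSE exit in the old-k8s-version Pause failure below: minikube pause aborts because "sudo runc list -f json" fails on the node with "open /run/runc: no such file or directory" (see the captured stderr). A minimal reproduction sketch, assuming the profile name and binary path taken from this report and reusing the invocations shown in the logs; it only re-runs the failing checks by hand and does not change cluster state:

	# Re-run the container listing step that trips the pause path (taken from the stderr below).
	out/minikube-linux-amd64 -p old-k8s-version-640910 ssh "sudo runc list -f json"

	# The crictl query minikube issues just before calling runc; this one succeeds in the captured logs.
	out/minikube-linux-amd64 -p old-k8s-version-640910 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

	# Check whether the runc state directory named in the error exists on the node at all.
	out/minikube-linux-amd64 -p old-k8s-version-640910 ssh "ls -ld /run/runc"
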

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-640910 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-640910 --alsologtostderr -v=1: exit status 80 (1.618349108s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-640910 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:33:48.521147  897463 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:48.521272  897463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:48.521283  897463 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:48.521289  897463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:48.521601  897463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:48.521914  897463 out.go:368] Setting JSON to false
	I1217 08:33:48.521943  897463 mustload.go:66] Loading cluster: old-k8s-version-640910
	I1217 08:33:48.522282  897463 config.go:182] Loaded profile config "old-k8s-version-640910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 08:33:48.522737  897463 cli_runner.go:164] Run: docker container inspect old-k8s-version-640910 --format={{.State.Status}}
	I1217 08:33:48.542684  897463 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:33:48.542983  897463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:48.608333  897463 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 08:33:48.597120743 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:48.609085  897463 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-640910 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 08:33:48.611283  897463 out.go:179] * Pausing node old-k8s-version-640910 ... 
	I1217 08:33:48.615355  897463 host.go:66] Checking if "old-k8s-version-640910" exists ...
	I1217 08:33:48.615699  897463 ssh_runner.go:195] Run: systemctl --version
	I1217 08:33:48.615756  897463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-640910
	I1217 08:33:48.640805  897463 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/old-k8s-version-640910/id_ed25519 Username:docker}
	I1217 08:33:48.735703  897463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:48.751050  897463 pause.go:52] kubelet running: true
	I1217 08:33:48.751134  897463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:48.927125  897463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:48.927333  897463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:33:49.006147  897463 cri.go:89] found id: "eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7"
	I1217 08:33:49.006176  897463 cri.go:89] found id: "c38092d4284b468cd95031a8c84e47ceccc47981d16b24e73f4739e5b682ef80"
	I1217 08:33:49.006182  897463 cri.go:89] found id: "960a3bdc04bdfec99662377004d8feee5da2e703cde83c5b0e1933866e6fa0bf"
	I1217 08:33:49.006187  897463 cri.go:89] found id: "5fb702ab95ee453d8978dde38c9619ce951ba86e93125485d40d2786c8f6db2b"
	I1217 08:33:49.006191  897463 cri.go:89] found id: "6530bccce608895b0ddd386856e60241278889a3f8ad76ded6aed426d1ad3908"
	I1217 08:33:49.006195  897463 cri.go:89] found id: "6dfafda4a8376a62774b77f103455cc0d2b5f250398def06c2cf32987520ce06"
	I1217 08:33:49.006200  897463 cri.go:89] found id: "7eafd93060d3f284232024a30952747c643e5687f1522fbae0552e43a2a6bf1b"
	I1217 08:33:49.006205  897463 cri.go:89] found id: "10d55d7be36a6031742b5e41c0ec0b321aa9931156dd81d08e242cbb87042faf"
	I1217 08:33:49.006210  897463 cri.go:89] found id: "37834c35c7b18ff5d4e4d0eadba970120383e32a30ffa825e54e232d12310cd5"
	I1217 08:33:49.006217  897463 cri.go:89] found id: "9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94"
	I1217 08:33:49.006222  897463 cri.go:89] found id: "e2abf3689b240d1c4dda4da29b79bd55387e6acf2a5b5cba769a884d583ac8ea"
	I1217 08:33:49.006227  897463 cri.go:89] found id: ""
	I1217 08:33:49.006280  897463 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:33:49.019775  897463 retry.go:31] will retry after 295.653168ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:49Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:49.316356  897463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:49.331634  897463 pause.go:52] kubelet running: false
	I1217 08:33:49.331691  897463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:49.480561  897463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:49.480649  897463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:33:49.555960  897463 cri.go:89] found id: "eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7"
	I1217 08:33:49.555989  897463 cri.go:89] found id: "c38092d4284b468cd95031a8c84e47ceccc47981d16b24e73f4739e5b682ef80"
	I1217 08:33:49.555996  897463 cri.go:89] found id: "960a3bdc04bdfec99662377004d8feee5da2e703cde83c5b0e1933866e6fa0bf"
	I1217 08:33:49.556011  897463 cri.go:89] found id: "5fb702ab95ee453d8978dde38c9619ce951ba86e93125485d40d2786c8f6db2b"
	I1217 08:33:49.556017  897463 cri.go:89] found id: "6530bccce608895b0ddd386856e60241278889a3f8ad76ded6aed426d1ad3908"
	I1217 08:33:49.556022  897463 cri.go:89] found id: "6dfafda4a8376a62774b77f103455cc0d2b5f250398def06c2cf32987520ce06"
	I1217 08:33:49.556027  897463 cri.go:89] found id: "7eafd93060d3f284232024a30952747c643e5687f1522fbae0552e43a2a6bf1b"
	I1217 08:33:49.556032  897463 cri.go:89] found id: "10d55d7be36a6031742b5e41c0ec0b321aa9931156dd81d08e242cbb87042faf"
	I1217 08:33:49.556036  897463 cri.go:89] found id: "37834c35c7b18ff5d4e4d0eadba970120383e32a30ffa825e54e232d12310cd5"
	I1217 08:33:49.556046  897463 cri.go:89] found id: "9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94"
	I1217 08:33:49.556050  897463 cri.go:89] found id: "e2abf3689b240d1c4dda4da29b79bd55387e6acf2a5b5cba769a884d583ac8ea"
	I1217 08:33:49.556055  897463 cri.go:89] found id: ""
	I1217 08:33:49.556107  897463 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:33:49.569405  897463 retry.go:31] will retry after 234.062641ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:49Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:49.803668  897463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:49.817050  897463 pause.go:52] kubelet running: false
	I1217 08:33:49.817121  897463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:49.968675  897463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:49.968763  897463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:33:50.046461  897463 cri.go:89] found id: "eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7"
	I1217 08:33:50.046489  897463 cri.go:89] found id: "c38092d4284b468cd95031a8c84e47ceccc47981d16b24e73f4739e5b682ef80"
	I1217 08:33:50.046496  897463 cri.go:89] found id: "960a3bdc04bdfec99662377004d8feee5da2e703cde83c5b0e1933866e6fa0bf"
	I1217 08:33:50.046501  897463 cri.go:89] found id: "5fb702ab95ee453d8978dde38c9619ce951ba86e93125485d40d2786c8f6db2b"
	I1217 08:33:50.046506  897463 cri.go:89] found id: "6530bccce608895b0ddd386856e60241278889a3f8ad76ded6aed426d1ad3908"
	I1217 08:33:50.046511  897463 cri.go:89] found id: "6dfafda4a8376a62774b77f103455cc0d2b5f250398def06c2cf32987520ce06"
	I1217 08:33:50.046515  897463 cri.go:89] found id: "7eafd93060d3f284232024a30952747c643e5687f1522fbae0552e43a2a6bf1b"
	I1217 08:33:50.046519  897463 cri.go:89] found id: "10d55d7be36a6031742b5e41c0ec0b321aa9931156dd81d08e242cbb87042faf"
	I1217 08:33:50.046524  897463 cri.go:89] found id: "37834c35c7b18ff5d4e4d0eadba970120383e32a30ffa825e54e232d12310cd5"
	I1217 08:33:50.046552  897463 cri.go:89] found id: "9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94"
	I1217 08:33:50.046560  897463 cri.go:89] found id: "e2abf3689b240d1c4dda4da29b79bd55387e6acf2a5b5cba769a884d583ac8ea"
	I1217 08:33:50.046581  897463 cri.go:89] found id: ""
	I1217 08:33:50.046638  897463 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:33:50.062125  897463 out.go:203] 
	W1217 08:33:50.063758  897463 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 08:33:50.063781  897463 out.go:285] * 
	* 
	W1217 08:33:50.069505  897463 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 08:33:50.071128  897463 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-640910 --alsologtostderr -v=1 failed: exit status 80
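The pause failure above comes from `sudo runc list -f json` returning "open /run/runc: no such file or directory" on the node, while the preceding crictl listing still reported running container IDs. For local triage, the same two checks can be replayed against the node container by hand; a minimal sketch, assuming the kic container name old-k8s-version-640910 reported by docker inspect below and that the host running the job has docker access:

	# Replay the listing that the pause path ran (see the log above); this is the call that failed.
	docker exec old-k8s-version-640910 sudo runc list -f json
	# Compare with the CRI-side view, which did return container IDs in this run.
	docker exec old-k8s-version-640910 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

These commands only reproduce what the test already executed via ssh_runner; they are not part of the test itself.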
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-640910
helpers_test.go:244: (dbg) docker inspect old-k8s-version-640910:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265",
	        "Created": "2025-12-17T08:31:29.610221474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885806,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:32:50.046813827Z",
	            "FinishedAt": "2025-12-17T08:32:49.124191037Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/hostname",
	        "HostsPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/hosts",
	        "LogPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265-json.log",
	        "Name": "/old-k8s-version-640910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-640910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-640910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265",
	                "LowerDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-640910",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-640910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-640910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-640910",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-640910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c3df21f3f1788dc908a59d068e07799b82212aa71cce8af344dd65f7fccbbcd9",
	            "SandboxKey": "/var/run/docker/netns/c3df21f3f178",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-640910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b355f632d1e424bfa46e67475c4907bc9f9b97c58ca4b258317e871521160531",
	                    "EndpointID": "3003c77f30effd46f6f625444d026373c6078c471b5bb00e97f16eff0b7331b1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:67:12:be:a2:76",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-640910",
	                        "2054167e9d36"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910: exit status 2 (356.583265ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-640910 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-640910 logs -n 25: (1.247777186s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                                                                                               │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p no-preload-936988 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                   │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                               │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:16
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:33:16.728946  893657 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:16.729265  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729278  893657 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:16.729285  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729634  893657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:16.730240  893657 out.go:368] Setting JSON to false
	I1217 08:33:16.732006  893657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8142,"bootTime":1765952255,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:16.732103  893657 start.go:143] virtualization: kvm guest
	I1217 08:33:16.736563  893657 out.go:179] * [default-k8s-diff-port-225657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:16.738941  893657 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:16.738995  893657 notify.go:221] Checking for updates...
	I1217 08:33:16.742759  893657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:16.746850  893657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:16.748597  893657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:16.750659  893657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:16.753168  893657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:16.756488  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:16.757459  893657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:16.792888  893657 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:16.793019  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.867744  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.854455776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.867913  893657 docker.go:319] overlay module found
	I1217 08:33:16.871013  893657 out.go:179] * Using the docker driver based on existing profile
	I1217 08:33:16.873307  893657 start.go:309] selected driver: docker
	I1217 08:33:16.873331  893657 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.873487  893657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:16.874376  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.951072  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.935361077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.951510  893657 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:16.951573  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:16.951645  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:16.951709  893657 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.956015  893657 out.go:179] * Starting "default-k8s-diff-port-225657" primary control-plane node in "default-k8s-diff-port-225657" cluster
	I1217 08:33:16.957479  893657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:16.959060  893657 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:16.960240  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:16.960283  893657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:16.960307  893657 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:16.960329  893657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:16.960440  893657 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:16.960458  893657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:33:16.960662  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:16.986877  893657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:16.986906  893657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:16.986928  893657 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:16.986979  893657 start.go:360] acquireMachinesLock for default-k8s-diff-port-225657: {Name:mkf524609fef75b896bc809c6c5673b68f778ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:16.987060  893657 start.go:364] duration metric: took 53.96µs to acquireMachinesLock for "default-k8s-diff-port-225657"
	I1217 08:33:16.987092  893657 start.go:96] Skipping create...Using existing machine configuration
	I1217 08:33:16.987100  893657 fix.go:54] fixHost starting: 
	I1217 08:33:16.987446  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.012833  893657 fix.go:112] recreateIfNeeded on default-k8s-diff-port-225657: state=Stopped err=<nil>
	W1217 08:33:17.012874  893657 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 08:33:13.213803  890801 addons.go:530] duration metric: took 2.18918486s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:13.705622  890801 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 08:33:13.710080  890801 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 08:33:13.711221  890801 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:33:13.711249  890801 api_server.go:131] duration metric: took 506.788041ms to wait for apiserver health ...
	I1217 08:33:13.711258  890801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:13.715494  890801 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:13.715559  890801 system_pods.go:61] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.715571  890801 system_pods.go:61] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.715580  890801 system_pods.go:61] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.715587  890801 system_pods.go:61] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.715598  890801 system_pods.go:61] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.715604  890801 system_pods.go:61] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.715610  890801 system_pods.go:61] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.715617  890801 system_pods.go:61] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.715626  890801 system_pods.go:74] duration metric: took 4.361363ms to wait for pod list to return data ...
	I1217 08:33:13.715639  890801 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:13.718438  890801 default_sa.go:45] found service account: "default"
	I1217 08:33:13.718465  890801 default_sa.go:55] duration metric: took 2.817296ms for default service account to be created ...
	I1217 08:33:13.718477  890801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:13.722138  890801 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:13.722180  890801 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.722194  890801 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.722204  890801 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.722214  890801 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.722223  890801 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.722234  890801 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.722243  890801 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.722259  890801 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.722272  890801 system_pods.go:126] duration metric: took 3.785279ms to wait for k8s-apps to be running ...
	I1217 08:33:13.722289  890801 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:13.722352  890801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:13.737774  890801 system_svc.go:56] duration metric: took 15.474847ms WaitForService to wait for kubelet
	I1217 08:33:13.737805  890801 kubeadm.go:587] duration metric: took 2.713427844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:13.737833  890801 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:13.772714  890801 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:13.772756  890801 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:13.772775  890801 node_conditions.go:105] duration metric: took 34.937186ms to run NodePressure ...
	I1217 08:33:13.772792  890801 start.go:242] waiting for startup goroutines ...
	I1217 08:33:13.772803  890801 start.go:247] waiting for cluster config update ...
	I1217 08:33:13.772825  890801 start.go:256] writing updated cluster config ...
	I1217 08:33:13.773173  890801 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:13.777812  890801 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:13.783637  890801 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:15.868337  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:15.181390  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.182344  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:19.681167  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.003119  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:19.003173  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:21.003325  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:17.017254  893657 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-225657" ...
	I1217 08:33:17.017346  893657 cli_runner.go:164] Run: docker start default-k8s-diff-port-225657
	I1217 08:33:17.373663  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.400760  893657 kic.go:432] container "default-k8s-diff-port-225657" state is running.
	I1217 08:33:17.401442  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:17.429446  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:17.429718  893657 machine.go:94] provisionDockerMachine start ...
	I1217 08:33:17.429809  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:17.458096  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:17.458238  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:17.458254  893657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:33:17.459170  893657 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48512->127.0.0.1:33530: read: connection reset by peer
	I1217 08:33:20.612283  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.612308  893657 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225657"
	I1217 08:33:20.612373  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.636332  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.636502  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.636519  893657 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225657 && echo "default-k8s-diff-port-225657" | sudo tee /etc/hostname
	I1217 08:33:20.804510  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.804742  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.834923  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.835091  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.835140  893657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225657/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:33:20.984217  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:33:20.984254  893657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:33:20.984307  893657 ubuntu.go:190] setting up certificates
	I1217 08:33:20.984330  893657 provision.go:84] configureAuth start
	I1217 08:33:20.984434  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:21.010705  893657 provision.go:143] copyHostCerts
	I1217 08:33:21.010798  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:33:21.010816  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:33:21.010896  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:33:21.011010  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:33:21.011024  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:33:21.011068  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:33:21.011154  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:33:21.011165  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:33:21.011204  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:33:21.011353  893657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225657 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-225657 localhost minikube]
	I1217 08:33:21.094979  893657 provision.go:177] copyRemoteCerts
	I1217 08:33:21.095063  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:33:21.095123  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.119755  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:21.226499  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:33:21.252430  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 08:33:21.276413  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:33:21.304875  893657 provision.go:87] duration metric: took 320.523082ms to configureAuth
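
configureAuth above refreshes the host certs and regenerates the server certificate with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.103.2, the profile name, localhost, minikube). A minimal crypto/x509 sketch of producing a certificate with those SANs; it is self-signed for brevity, whereas the real flow signs the server cert with the ca.pem/ca-key.pem pair:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// SANs copied from the provision.go:117 line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-225657"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-225657", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		}
		// Self-signed here for brevity; the real flow uses the CA key as the signer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
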
	I1217 08:33:21.304910  893657 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:33:21.305140  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:21.305286  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.329333  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:21.329469  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:21.329488  893657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1217 08:33:18.289602  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:20.292744  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:22.296974  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:21.764845  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:24.179988  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	I1217 08:33:22.731689  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:33:22.731722  893657 machine.go:97] duration metric: took 5.301986136s to provisionDockerMachine
	I1217 08:33:22.731749  893657 start.go:293] postStartSetup for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:33:22.731769  893657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:33:22.731852  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:33:22.731920  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.761364  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:22.876306  893657 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:33:22.881359  893657 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:33:22.881395  893657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:33:22.881410  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:33:22.881482  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:33:22.881678  893657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:33:22.881825  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:33:22.894563  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:22.920348  893657 start.go:296] duration metric: took 188.5726ms for postStartSetup
	I1217 08:33:22.920449  893657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:33:22.920492  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.945406  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.048667  893657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:33:23.054963  893657 fix.go:56] duration metric: took 6.067856877s for fixHost
	I1217 08:33:23.054990  893657 start.go:83] releasing machines lock for "default-k8s-diff-port-225657", held for 6.067916149s
	I1217 08:33:23.055062  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:23.078512  893657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:33:23.078652  893657 ssh_runner.go:195] Run: cat /version.json
	I1217 08:33:23.078657  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.078715  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.105947  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.108771  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.290972  893657 ssh_runner.go:195] Run: systemctl --version
	I1217 08:33:23.299819  893657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:33:23.349000  893657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:33:23.357029  893657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:33:23.357106  893657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:33:23.369670  893657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:33:23.369700  893657 start.go:496] detecting cgroup driver to use...
	I1217 08:33:23.369789  893657 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:33:23.369842  893657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:33:23.391525  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:33:23.409286  893657 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:33:23.409355  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:33:23.431984  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:33:23.448992  893657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:33:23.545374  893657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:33:23.651657  893657 docker.go:234] disabling docker service ...
	I1217 08:33:23.651738  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:33:23.671894  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:33:23.692032  893657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:33:23.817651  893657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:33:23.939609  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:33:23.958144  893657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:33:23.979250  893657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:33:23.979317  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:23.992227  893657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:33:23.992295  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.006950  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.020376  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.035025  893657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:33:24.046957  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.061093  893657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.074985  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.089611  893657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:33:24.101042  893657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:33:24.111709  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:24.230001  893657 ssh_runner.go:195] Run: sudo systemctl restart crio
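
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts CRI-O. The same kind of substitution expressed as a small Go regexp sketch, operating on a string rather than the file; the keys and values come from the commands above, everything else is illustrative:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	`
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		// Equivalent of: sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
			ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}
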
	I1217 08:33:24.884276  893657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:33:24.884364  893657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:33:24.889824  893657 start.go:564] Will wait 60s for crictl version
	I1217 08:33:24.889930  893657 ssh_runner.go:195] Run: which crictl
	I1217 08:33:24.895473  893657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:33:24.926169  893657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:33:24.926256  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.960427  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.997284  893657 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 08:33:24.999194  893657 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:25.022353  893657 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:33:25.027067  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.040819  893657 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:33:25.040970  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:25.041036  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.078474  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.078507  893657 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:33:25.078631  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.106774  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.106807  893657 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:33:25.106818  893657 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1217 08:33:25.106948  893657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:33:25.107036  893657 ssh_runner.go:195] Run: crio config
	I1217 08:33:25.157252  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:25.157281  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:25.157301  893657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:33:25.157340  893657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225657 NodeName:default-k8s-diff-port-225657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:33:25.157504  893657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:33:25.157619  893657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:33:25.166826  893657 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:33:25.166896  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:33:25.175526  893657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1217 08:33:25.190511  893657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:33:25.205768  893657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 08:33:25.223688  893657 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:33:25.229125  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.242599  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:25.333339  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:25.360367  893657 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657 for IP: 192.168.103.2
	I1217 08:33:25.360421  893657 certs.go:195] generating shared ca certs ...
	I1217 08:33:25.360443  893657 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:25.360645  893657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:33:25.360690  893657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:33:25.360701  893657 certs.go:257] generating profile certs ...
	I1217 08:33:25.360801  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key
	I1217 08:33:25.360866  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92
	I1217 08:33:25.360902  893657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key
	I1217 08:33:25.361012  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:33:25.361046  893657 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:33:25.361053  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:33:25.361077  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:33:25.361100  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:33:25.361123  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:33:25.361168  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:25.361783  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:33:25.382178  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:33:25.405095  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:33:25.426692  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:33:25.452196  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 08:33:25.472263  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:33:25.492102  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:33:25.512166  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:33:25.530987  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:33:25.550506  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:33:25.571554  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:33:25.591167  893657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:33:25.604816  893657 ssh_runner.go:195] Run: openssl version
	I1217 08:33:25.611390  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.620038  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:33:25.628157  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632565  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632630  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.668190  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:33:25.677861  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.686457  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:33:25.694766  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.698960  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.699026  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.735265  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:33:25.743914  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.752739  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:33:25.762448  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766776  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766841  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.804716  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
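
Each certificate above is linked from /usr/share/ca-certificates into /etc/ssl/certs and then checked for a <subject-hash>.0 link, using the hash printed by openssl x509 -hash -noout. A rough sketch of producing such a link, reusing the same openssl invocation; the paths here are examples only:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash creates the /etc/ssl/certs/<subject-hash>.0 style symlink that the
	// "sudo test -L" checks above look for. The hash comes from the same openssl
	// command the log uses; certDir is a parameter here purely for illustration.
	func linkByHash(certPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certDir, hash+".0")
		_ = os.Remove(link) // replace an existing link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked")
	}
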
	I1217 08:33:25.813678  893657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:33:25.818021  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:33:25.853937  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:33:25.905092  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:33:25.949996  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:33:25.998953  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:33:26.055041  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
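
The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check in Go, assuming a PEM-encoded certificate file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file expires
	// within d, i.e. the condition that makes "openssl x509 -checkend <seconds>" fail.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// 86400 seconds, as in the log.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
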
	I1217 08:33:26.093895  893657 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:26.093984  893657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:33:26.094037  893657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:33:26.131324  893657 cri.go:89] found id: "29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9"
	I1217 08:33:26.131350  893657 cri.go:89] found id: "f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52"
	I1217 08:33:26.131356  893657 cri.go:89] found id: "e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610"
	I1217 08:33:26.131361  893657 cri.go:89] found id: "75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d"
	I1217 08:33:26.131366  893657 cri.go:89] found id: ""
	I1217 08:33:26.131415  893657 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:33:26.144718  893657 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:26Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:26.144807  893657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:33:26.153957  893657 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:33:26.153979  893657 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:33:26.154032  893657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:33:26.162673  893657 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:33:26.164033  893657 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-225657" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.165037  893657 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-225657" cluster setting kubeconfig missing "default-k8s-diff-port-225657" context setting]
	I1217 08:33:26.166469  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.168992  893657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:33:26.178665  893657 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 08:33:26.178709  893657 kubeadm.go:602] duration metric: took 24.72291ms to restartPrimaryControlPlane
	I1217 08:33:26.178722  893657 kubeadm.go:403] duration metric: took 84.838549ms to StartCluster
	I1217 08:33:26.178743  893657 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.178810  893657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.181267  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.181609  893657 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:26.181743  893657 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:33:26.181863  893657 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181869  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:26.181897  893657 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.181907  893657 addons.go:248] addon storage-provisioner should already be in state true
	I1217 08:33:26.181905  893657 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181922  893657 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181933  893657 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-225657"
	I1217 08:33:26.181936  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	W1217 08:33:26.181943  893657 addons.go:248] addon dashboard should already be in state true
	I1217 08:33:26.181946  893657 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225657"
	I1217 08:33:26.181976  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.182259  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182470  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182505  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.184915  893657 out.go:179] * Verifying Kubernetes components...
	I1217 08:33:26.186210  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:26.212304  893657 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 08:33:26.214226  893657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:33:26.214980  893657 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 08:33:23.502639  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:26.006843  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:26.216388  893657 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.216412  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:33:26.216477  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.217466  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 08:33:26.217490  893657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 08:33:26.217560  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.228115  893657 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.228150  893657 addons.go:248] addon default-storageclass should already be in state true
	I1217 08:33:26.228184  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.228704  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.261124  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.263048  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.276039  893657 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.276071  893657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:33:26.276135  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.304101  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.360397  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:26.376999  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 08:33:26.377127  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 08:33:26.380755  893657 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:26.392863  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 08:33:26.392899  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 08:33:26.392976  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.413220  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.414384  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 08:33:26.414420  893657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 08:33:26.434913  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 08:33:26.434938  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 08:33:26.451283  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 08:33:26.451316  893657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 08:33:26.476854  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 08:33:26.476882  893657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 08:33:26.492758  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 08:33:26.492796  893657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 08:33:26.508872  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 08:33:26.508899  893657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 08:33:26.524202  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 08:33:26.524232  893657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 08:33:26.539724  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 08:33:24.789524  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:27.291456  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:27.959793  893657 node_ready.go:49] node "default-k8s-diff-port-225657" is "Ready"
	I1217 08:33:27.959838  893657 node_ready.go:38] duration metric: took 1.579048972s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:27.959857  893657 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:33:27.959926  893657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:33:28.524393  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.131379535s)
	I1217 08:33:28.524466  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111215264s)
	I1217 08:33:28.524703  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.98493694s)
	I1217 08:33:28.524763  893657 api_server.go:72] duration metric: took 2.343114327s to wait for apiserver process to appear ...
	I1217 08:33:28.524791  893657 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:33:28.524815  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:28.526653  893657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-225657 addons enable metrics-server
	
	I1217 08:33:28.530002  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:28.530034  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:28.535131  893657 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1217 08:33:26.679455  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:29.179302  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:28.012078  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:30.501159  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:28.536292  893657 addons.go:530] duration metric: took 2.354557541s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:29.025630  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.030789  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:29.030828  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:29.525077  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.529889  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1217 08:33:29.530993  893657 api_server.go:141] control plane version: v1.34.3
	I1217 08:33:29.531018  893657 api_server.go:131] duration metric: took 1.006217623s to wait for apiserver health ...
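
The sequence above polls https://192.168.103.2:8444/healthz roughly every half second, accepting the 500 responses while the rbac and priority-class post-start hooks finish, and stops once the endpoint returns 200. A minimal polling sketch; the InsecureSkipVerify transport is a simplification for the example, the real client trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
	// the deadline passes, mirroring the retry-on-500 loop in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only; minikube verifies the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8444/healthz", time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver healthy")
	}
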
	I1217 08:33:29.531030  893657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:29.537008  893657 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:29.537148  893657 system_pods.go:61] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.537251  893657 system_pods.go:61] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.537275  893657 system_pods.go:61] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.537287  893657 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.537302  893657 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.537311  893657 system_pods.go:61] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.537373  893657 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.537391  893657 system_pods.go:61] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.537403  893657 system_pods.go:74] duration metric: took 6.36482ms to wait for pod list to return data ...
	I1217 08:33:29.537418  893657 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:29.540237  893657 default_sa.go:45] found service account: "default"
	I1217 08:33:29.540261  893657 default_sa.go:55] duration metric: took 2.835186ms for default service account to be created ...
	I1217 08:33:29.540272  893657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:29.547420  893657 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:29.547465  893657 system_pods.go:89] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.547486  893657 system_pods.go:89] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.547494  893657 system_pods.go:89] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.547502  893657 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.547511  893657 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.547519  893657 system_pods.go:89] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.547526  893657 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.547545  893657 system_pods.go:89] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.547556  893657 system_pods.go:126] duration metric: took 7.275351ms to wait for k8s-apps to be running ...
	I1217 08:33:29.547565  893657 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:29.547621  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:29.570551  893657 system_svc.go:56] duration metric: took 22.962055ms WaitForService to wait for kubelet
	I1217 08:33:29.570588  893657 kubeadm.go:587] duration metric: took 3.388942328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:29.570612  893657 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:29.573955  893657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:29.573987  893657 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:29.574004  893657 node_conditions.go:105] duration metric: took 3.385946ms to run NodePressure ...
	I1217 08:33:29.574016  893657 start.go:242] waiting for startup goroutines ...
	I1217 08:33:29.574023  893657 start.go:247] waiting for cluster config update ...
	I1217 08:33:29.574033  893657 start.go:256] writing updated cluster config ...
	I1217 08:33:29.574301  893657 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:29.579418  893657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:29.583233  893657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:31.590019  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:29.790012  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:32.289282  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:31.679660  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:34.180323  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:33.000662  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:34.501698  886345 pod_ready.go:94] pod "coredns-66bc5c9577-p7sqj" is "Ready"
	I1217 08:33:34.501740  886345 pod_ready.go:86] duration metric: took 31.006567227s for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.504499  886345 pod_ready.go:83] waiting for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.509821  886345 pod_ready.go:94] pod "etcd-embed-certs-581631" is "Ready"
	I1217 08:33:34.509852  886345 pod_ready.go:86] duration metric: took 5.326473ms for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.512747  886345 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.518177  886345 pod_ready.go:94] pod "kube-apiserver-embed-certs-581631" is "Ready"
	I1217 08:33:34.518209  886345 pod_ready.go:86] duration metric: took 5.434504ms for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.520782  886345 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.699712  886345 pod_ready.go:94] pod "kube-controller-manager-embed-certs-581631" is "Ready"
	I1217 08:33:34.699750  886345 pod_ready.go:86] duration metric: took 178.942994ms for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.899576  886345 pod_ready.go:83] waiting for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.299641  886345 pod_ready.go:94] pod "kube-proxy-7z26t" is "Ready"
	I1217 08:33:35.299677  886345 pod_ready.go:86] duration metric: took 400.071136ms for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.499469  886345 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.898985  886345 pod_ready.go:94] pod "kube-scheduler-embed-certs-581631" is "Ready"
	I1217 08:33:35.899016  886345 pod_ready.go:86] duration metric: took 399.518108ms for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.899032  886345 pod_ready.go:40] duration metric: took 32.408536567s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:35.962165  886345 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:33:35.967810  886345 out.go:179] * Done! kubectl is now configured to use "embed-certs-581631" cluster and "default" namespace by default
	I1217 08:33:35.180035  885608 pod_ready.go:94] pod "coredns-5dd5756b68-mr99d" is "Ready"
	I1217 08:33:35.180070  885608 pod_ready.go:86] duration metric: took 33.507046133s for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.183848  885608 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.189882  885608 pod_ready.go:94] pod "etcd-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.189917  885608 pod_ready.go:86] duration metric: took 6.040788ms for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.193611  885608 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.199327  885608 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.199356  885608 pod_ready.go:86] duration metric: took 5.717005ms for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.202742  885608 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.377269  885608 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.377299  885608 pod_ready.go:86] duration metric: took 174.528391ms for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.578921  885608 pod_ready.go:83] waiting for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.977275  885608 pod_ready.go:94] pod "kube-proxy-cwfwr" is "Ready"
	I1217 08:33:35.977308  885608 pod_ready.go:86] duration metric: took 398.362323ms for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.179026  885608 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580866  885608 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-640910" is "Ready"
	I1217 08:33:36.580905  885608 pod_ready.go:86] duration metric: took 401.837858ms for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580922  885608 pod_ready.go:40] duration metric: took 34.912908892s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:36.657518  885608 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 08:33:36.659911  885608 out.go:203] 
	W1217 08:33:36.661799  885608 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 08:33:36.663761  885608 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 08:33:36.666738  885608 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-640910" cluster and "default" namespace by default
	W1217 08:33:34.089133  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:36.092451  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:34.289870  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:36.290714  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:38.589783  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:41.088727  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:38.290930  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:40.789798  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:43.089067  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:45.089693  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:43.290645  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:45.789580  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:46.288655  890801 pod_ready.go:94] pod "coredns-7d764666f9-ssxts" is "Ready"
	I1217 08:33:46.288692  890801 pod_ready.go:86] duration metric: took 32.505014626s for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.291480  890801 pod_ready.go:83] waiting for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.297312  890801 pod_ready.go:94] pod "etcd-no-preload-936988" is "Ready"
	I1217 08:33:46.297340  890801 pod_ready.go:86] duration metric: took 5.835833ms for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.392910  890801 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.397502  890801 pod_ready.go:94] pod "kube-apiserver-no-preload-936988" is "Ready"
	I1217 08:33:46.397547  890801 pod_ready.go:86] duration metric: took 4.609982ms for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.399936  890801 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.487409  890801 pod_ready.go:94] pod "kube-controller-manager-no-preload-936988" is "Ready"
	I1217 08:33:46.487441  890801 pod_ready.go:86] duration metric: took 87.480941ms for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.687921  890801 pod_ready.go:83] waiting for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.087638  890801 pod_ready.go:94] pod "kube-proxy-rrz8t" is "Ready"
	I1217 08:33:47.087672  890801 pod_ready.go:86] duration metric: took 399.721259ms for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.287284  890801 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687063  890801 pod_ready.go:94] pod "kube-scheduler-no-preload-936988" is "Ready"
	I1217 08:33:47.687100  890801 pod_ready.go:86] duration metric: took 399.78978ms for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687115  890801 pod_ready.go:40] duration metric: took 33.909261319s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:47.739016  890801 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:33:47.741018  890801 out.go:179] * Done! kubectl is now configured to use "no-preload-936988" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 08:33:21 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:21.577715368Z" level=info msg="Started container" PID=1745 containerID=8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper id=ed7df3b7-e127-4c92-87da-614af985a602 name=/runtime.v1.RuntimeService/StartContainer sandboxID=923b68fb1d4cb3895974c326556918bb6b83a0174d8f129ffa9a7982fce05459
	Dec 17 08:33:22 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:22.528422741Z" level=info msg="Removing container: 7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f" id=b28eb7cb-7e66-4c8a-91ce-cfc9ade92dcf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:22 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:22.618903029Z" level=info msg="Removed container 7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=b28eb7cb-7e66-4c8a-91ce-cfc9ade92dcf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.55010955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8c93ce67-f2b3-4cfc-b64c-1ead1f277e6a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.551241746Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1d6fb1d4-3512-465b-919e-2b5a92d686a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.552281976Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ac7bfc02-9239-4814-a720-cacfa10d3446 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.552419314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.558378699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.558642367Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9aeca9a97dfe7fd1609b80e585f4ca4e576daa02b04ec061fdbda959b599a75c/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.558673484Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9aeca9a97dfe7fd1609b80e585f4ca4e576daa02b04ec061fdbda959b599a75c/merged/etc/group: no such file or directory"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.559014272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.604121515Z" level=info msg="Created container eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7: kube-system/storage-provisioner/storage-provisioner" id=ac7bfc02-9239-4814-a720-cacfa10d3446 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.604947595Z" level=info msg="Starting container: eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7" id=29f27f2b-0d07-4cc9-86d4-bf7146c08fd3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.60692038Z" level=info msg="Started container" PID=1760 containerID=eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7 description=kube-system/storage-provisioner/storage-provisioner id=29f27f2b-0d07-4cc9-86d4-bf7146c08fd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d82aa67501575596e485a1fabc1a4471f9b6987b92534e190524d59bcb526f88
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.416649512Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=226faa0c-ba76-45d8-9b95-6a940280423b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.417570838Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2418ec21-6476-4503-b1fa-1f4a1e9b89d0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.418719063Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=db5228bb-2f97-430b-95db-bb342364249b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.418905978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.427158228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.428002637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.472618895Z" level=info msg="Created container 9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=db5228bb-2f97-430b-95db-bb342364249b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.473435097Z" level=info msg="Starting container: 9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94" id=03bbe247-bd6f-48b3-9580-c677777736de name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.476132655Z" level=info msg="Started container" PID=1777 containerID=9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper id=03bbe247-bd6f-48b3-9580-c677777736de name=/runtime.v1.RuntimeService/StartContainer sandboxID=923b68fb1d4cb3895974c326556918bb6b83a0174d8f129ffa9a7982fce05459
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.572163591Z" level=info msg="Removing container: 8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f" id=8cb419a6-8ac4-4e26-94f2-582d338931cf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.589263364Z" level=info msg="Removed container 8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=8cb419a6-8ac4-4e26-94f2-582d338931cf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9f9ef42bedc44       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   923b68fb1d4cb       dashboard-metrics-scraper-5f989dc9cf-g9p9n       kubernetes-dashboard
	eded2f3d7dd97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   d82aa67501575       storage-provisioner                              kube-system
	e2abf3689b240       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   8da509607af97       kubernetes-dashboard-8694d4445c-qvtl9            kubernetes-dashboard
	84ffa3df3899f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   4c149d2c75170       busybox                                          default
	c38092d4284b4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   8882f37a8282f       coredns-5dd5756b68-mr99d                         kube-system
	960a3bdc04bdf       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   1a3485c40eabf       kube-proxy-cwfwr                                 kube-system
	5fb702ab95ee4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   d82aa67501575       storage-provisioner                              kube-system
	6530bccce6088       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   a86fd6713b58a       kindnet-x9g6n                                    kube-system
	6dfafda4a8376       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   ced912e41202f       kube-apiserver-old-k8s-version-640910            kube-system
	7eafd93060d3f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   ac20085f13316       kube-scheduler-old-k8s-version-640910            kube-system
	10d55d7be36a6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   d679d8868e4d5       etcd-old-k8s-version-640910                      kube-system
	37834c35c7b18       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   f6354c0e2167f       kube-controller-manager-old-k8s-version-640910   kube-system
	
	
	==> coredns [c38092d4284b468cd95031a8c84e47ceccc47981d16b24e73f4739e5b682ef80] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51178 - 31318 "HINFO IN 1666544334221588076.73355886378692974. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.048055049s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-640910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-640910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=old-k8s-version-640910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_31_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-640910
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:33:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:32:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-640910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                a3280b33-8da6-4c10-b813-cb05f9aa1448
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-mr99d                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-640910                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-x9g6n                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-640910             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-640910    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-cwfwr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-640910             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-g9p9n        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qvtl9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node old-k8s-version-640910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-640910 event: Registered Node old-k8s-version-640910 in Controller
	  Normal  NodeReady                93s                kubelet          Node old-k8s-version-640910 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node old-k8s-version-640910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-640910 event: Registered Node old-k8s-version-640910 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [10d55d7be36a6031742b5e41c0ec0b321aa9931156dd81d08e242cbb87042faf] <==
	{"level":"info","ts":"2025-12-17T08:32:57.031703Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:32:57.031528Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-17T08:32:57.031915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-17T08:32:57.032269Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-17T08:32:57.032455Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T08:32:57.032558Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T08:32:57.035729Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T08:32:57.036089Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T08:32:57.036156Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T08:32:57.036296Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T08:32:57.036325Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T08:32:58.917597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:58.917663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:58.917706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:58.917722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.91773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.917742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.917752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.924927Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-640910 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:32:58.925098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:32:58.925116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:32:58.926437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T08:32:58.926572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-17T08:32:58.929287Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:32:58.929328Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:33:51 up  2:16,  0 user,  load average: 5.46, 4.26, 2.92
	Linux old-k8s-version-640910 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6530bccce608895b0ddd386856e60241278889a3f8ad76ded6aed426d1ad3908] <==
	I1217 08:33:01.037938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:01.038272       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 08:33:01.038463       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:01.038491       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:01.038518       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:01.243645       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:01.243672       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:01.243682       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:01.243806       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:01.634652       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:01.634797       1 metrics.go:72] Registering metrics
	I1217 08:33:01.634877       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:11.245627       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:11.245675       1 main.go:301] handling current node
	I1217 08:33:21.244720       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:21.244763       1 main.go:301] handling current node
	I1217 08:33:31.244081       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:31.244117       1 main.go:301] handling current node
	I1217 08:33:41.244435       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:41.244489       1 main.go:301] handling current node
	I1217 08:33:51.250250       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:51.250281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6dfafda4a8376a62774b77f103455cc0d2b5f250398def06c2cf32987520ce06] <==
	I1217 08:33:00.204425       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1217 08:33:00.302680       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 08:33:00.310449       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 08:33:00.310583       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 08:33:00.311306       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:00.311430       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 08:33:00.311791       1 aggregator.go:166] initial CRD sync complete...
	I1217 08:33:00.311811       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 08:33:00.311819       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 08:33:00.311826       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:33:00.311848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 08:33:00.313339       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 08:33:00.313398       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 08:33:00.351370       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:33:01.207631       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:33:01.468818       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 08:33:01.517911       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 08:33:01.539696       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:01.550017       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:01.561817       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 08:33:01.602424       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.250.103"}
	I1217 08:33:01.616559       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.39.80"}
	I1217 08:33:12.646566       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 08:33:12.652778       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 08:33:12.681720       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [37834c35c7b18ff5d4e4d0eadba970120383e32a30ffa825e54e232d12310cd5] <==
	I1217 08:33:12.701652       1 shared_informer.go:318] Caches are synced for namespace
	I1217 08:33:12.721496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.337368ms"
	I1217 08:33:12.722101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="172.17µs"
	I1217 08:33:12.733326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.832µs"
	I1217 08:33:12.744025       1 shared_informer.go:318] Caches are synced for stateful set
	I1217 08:33:12.757376       1 shared_informer.go:318] Caches are synced for service account
	I1217 08:33:12.791611       1 shared_informer.go:318] Caches are synced for ephemeral
	I1217 08:33:12.816323       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 08:33:12.824893       1 shared_informer.go:318] Caches are synced for persistent volume
	I1217 08:33:12.828124       1 shared_informer.go:318] Caches are synced for attach detach
	I1217 08:33:12.832610       1 shared_informer.go:318] Caches are synced for PVC protection
	I1217 08:33:12.839197       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 08:33:12.839205       1 shared_informer.go:318] Caches are synced for expand
	I1217 08:33:13.181985       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 08:33:13.216197       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 08:33:13.216236       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 08:33:18.545911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.806255ms"
	I1217 08:33:18.547245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.745µs"
	I1217 08:33:21.533918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="456.442µs"
	I1217 08:33:22.588883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="192.113µs"
	I1217 08:33:23.545814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.12µs"
	I1217 08:33:34.770070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.460632ms"
	I1217 08:33:34.770220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.861µs"
	I1217 08:33:36.587877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.431µs"
	I1217 08:33:42.993398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.47µs"
	
	
	==> kube-proxy [960a3bdc04bdfec99662377004d8feee5da2e703cde83c5b0e1933866e6fa0bf] <==
	I1217 08:33:00.869506       1 server_others.go:69] "Using iptables proxy"
	I1217 08:33:00.885724       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1217 08:33:00.914234       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:00.918008       1 server_others.go:152] "Using iptables Proxier"
	I1217 08:33:00.918056       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 08:33:00.918064       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 08:33:00.918099       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 08:33:00.918368       1 server.go:846] "Version info" version="v1.28.0"
	I1217 08:33:00.918385       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:00.919875       1 config.go:188] "Starting service config controller"
	I1217 08:33:00.919973       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 08:33:00.920047       1 config.go:315] "Starting node config controller"
	I1217 08:33:00.920092       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 08:33:00.920382       1 config.go:97] "Starting endpoint slice config controller"
	I1217 08:33:00.920401       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 08:33:01.021957       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 08:33:01.022079       1 shared_informer.go:318] Caches are synced for node config
	I1217 08:33:01.022151       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [7eafd93060d3f284232024a30952747c643e5687f1522fbae0552e43a2a6bf1b] <==
	I1217 08:32:57.781799       1 serving.go:348] Generated self-signed cert in-memory
	W1217 08:33:00.259380       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:00.259415       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:33:00.259452       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:00.259463       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:00.290799       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 08:33:00.290843       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:00.292589       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:00.292626       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 08:33:00.293811       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 08:33:00.293864       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 08:33:00.392842       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.679745     736 topology_manager.go:215] "Topology Admit Handler" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-g9p9n"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.829977     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eaf0b178-b6e1-417d-8664-8d4f909a1c06-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qvtl9\" (UID: \"eaf0b178-b6e1-417d-8664-8d4f909a1c06\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qvtl9"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.830045     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91c58b72-96f0-47f8-be48-b2c65d86af98-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-g9p9n\" (UID: \"91c58b72-96f0-47f8-be48-b2c65d86af98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.830097     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mm8\" (UniqueName: \"kubernetes.io/projected/eaf0b178-b6e1-417d-8664-8d4f909a1c06-kube-api-access-49mm8\") pod \"kubernetes-dashboard-8694d4445c-qvtl9\" (UID: \"eaf0b178-b6e1-417d-8664-8d4f909a1c06\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qvtl9"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.830218     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5g4j\" (UniqueName: \"kubernetes.io/projected/91c58b72-96f0-47f8-be48-b2c65d86af98-kube-api-access-f5g4j\") pod \"dashboard-metrics-scraper-5f989dc9cf-g9p9n\" (UID: \"91c58b72-96f0-47f8-be48-b2c65d86af98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n"
	Dec 17 08:33:21 old-k8s-version-640910 kubelet[736]: I1217 08:33:21.518044     736 scope.go:117] "RemoveContainer" containerID="7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f"
	Dec 17 08:33:21 old-k8s-version-640910 kubelet[736]: I1217 08:33:21.533594     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qvtl9" podStartSLOduration=4.916927 podCreationTimestamp="2025-12-17 08:33:12 +0000 UTC" firstStartedPulling="2025-12-17 08:33:13.021241455 +0000 UTC m=+16.714729442" lastFinishedPulling="2025-12-17 08:33:17.637824976 +0000 UTC m=+21.331312957" observedRunningTime="2025-12-17 08:33:18.529057049 +0000 UTC m=+22.222545043" watchObservedRunningTime="2025-12-17 08:33:21.533510515 +0000 UTC m=+25.226998510"
	Dec 17 08:33:22 old-k8s-version-640910 kubelet[736]: I1217 08:33:22.525253     736 scope.go:117] "RemoveContainer" containerID="7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f"
	Dec 17 08:33:22 old-k8s-version-640910 kubelet[736]: I1217 08:33:22.525700     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:22 old-k8s-version-640910 kubelet[736]: E1217 08:33:22.526015     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:23 old-k8s-version-640910 kubelet[736]: I1217 08:33:23.529953     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:23 old-k8s-version-640910 kubelet[736]: E1217 08:33:23.530286     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:24 old-k8s-version-640910 kubelet[736]: I1217 08:33:24.532286     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:24 old-k8s-version-640910 kubelet[736]: E1217 08:33:24.532806     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:31 old-k8s-version-640910 kubelet[736]: I1217 08:33:31.549526     736 scope.go:117] "RemoveContainer" containerID="5fb702ab95ee453d8978dde38c9619ce951ba86e93125485d40d2786c8f6db2b"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: I1217 08:33:36.415959     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: I1217 08:33:36.570752     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: I1217 08:33:36.570959     736 scope.go:117] "RemoveContainer" containerID="9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: E1217 08:33:36.571305     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:42 old-k8s-version-640910 kubelet[736]: I1217 08:33:42.981709     736 scope.go:117] "RemoveContainer" containerID="9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94"
	Dec 17 08:33:42 old-k8s-version-640910 kubelet[736]: E1217 08:33:42.981984     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: kubelet.service: Consumed 1.703s CPU time.
	
	
	==> kubernetes-dashboard [e2abf3689b240d1c4dda4da29b79bd55387e6acf2a5b5cba769a884d583ac8ea] <==
	2025/12/17 08:33:17 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:17 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:17 Using secret token for csrf signing
	2025/12/17 08:33:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:17 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 08:33:17 Generating JWE encryption key
	2025/12/17 08:33:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:17 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:17 Creating in-cluster Sidecar client
	2025/12/17 08:33:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:17 Serving insecurely on HTTP port: 9090
	2025/12/17 08:33:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:17 Starting overwatch
	
	
	==> storage-provisioner [5fb702ab95ee453d8978dde38c9619ce951ba86e93125485d40d2786c8f6db2b] <==
	I1217 08:33:00.808403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:30.812024       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7] <==
	I1217 08:33:31.623822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:31.640573       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:31.640702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 08:33:49.039434       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:33:49.039582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6840e7cf-d238-43b9-83af-eb3cc68a82f2", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-640910_e6971ad4-0d1b-4a3c-92eb-6d387dc2fee5 became leader
	I1217 08:33:49.039634       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-640910_e6971ad4-0d1b-4a3c-92eb-6d387dc2fee5!
	I1217 08:33:49.144232       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-640910_e6971ad4-0d1b-4a3c-92eb-6d387dc2fee5!
	

                                                
                                                
-- /stdout --
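The tail of the kubelet log above shows systemd stopping kubelet.service at 08:33:48, consistent with the pause operation under test stopping the kubelet unit on the node. A minimal manual check of the unit state, assuming the profile name from this run (hypothetical invocation, not part of the harness output; the harness itself runs the equivalent `systemctl is-active` probe over SSH):

	out/minikube-linux-amd64 -p old-k8s-version-640910 ssh "sudo systemctl is-active kubelet"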
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-640910 -n old-k8s-version-640910
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-640910 -n old-k8s-version-640910: exit status 2 (383.043285ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-640910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
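For reference, the paused-state check documented in this post-mortem can be repeated by hand with the same commands the harness ran (profile name taken from this run):

	out/minikube-linux-amd64 pause -p old-k8s-version-640910 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-640910 -n old-k8s-version-640910

As shown above, the status query still reported "Running" for the API server after the pause attempt, with exit status 2.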
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-640910
helpers_test.go:244: (dbg) docker inspect old-k8s-version-640910:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265",
	        "Created": "2025-12-17T08:31:29.610221474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885806,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:32:50.046813827Z",
	            "FinishedAt": "2025-12-17T08:32:49.124191037Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/hostname",
	        "HostsPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/hosts",
	        "LogPath": "/var/lib/docker/containers/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265/2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265-json.log",
	        "Name": "/old-k8s-version-640910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-640910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-640910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2054167e9d36ffd147209eb3e4625249a55fe04a5ad49ea07159203382623265",
	                "LowerDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca903647564b6ef4accc7da7842bf63d83c97c25e1df180af7c2958198e6d3cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-640910",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-640910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-640910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-640910",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-640910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c3df21f3f1788dc908a59d068e07799b82212aa71cce8af344dd65f7fccbbcd9",
	            "SandboxKey": "/var/run/docker/netns/c3df21f3f178",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-640910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b355f632d1e424bfa46e67475c4907bc9f9b97c58ca4b258317e871521160531",
	                    "EndpointID": "3003c77f30effd46f6f625444d026373c6078c471b5bb00e97f16eff0b7331b1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:67:12:be:a2:76",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-640910",
	                        "2054167e9d36"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
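The full docker inspect dump above can be narrowed to the fields the harness actually checks by using the same Go-template queries that appear elsewhere in this log; a sketch against the node container from this run (hypothetical manual invocations):

	docker container inspect old-k8s-version-640910 --format '{{.State.Status}} paused={{.State.Paused}}'
	docker container inspect old-k8s-version-640910 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

The first line reports the running/paused flags shown in the State block above; the second resolves the host port (33518 in this dump) on which the container's 8443/tcp API server port is published.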
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910: exit status 2 (374.084793ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-640910 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-640910 logs -n 25: (1.281743587s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-055130 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ ssh     │ -p bridge-055130 sudo crio config                                                                                                                                                                                                             │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:31 UTC │
	│ delete  │ -p bridge-055130                                                                                                                                                                                                                              │ bridge-055130                │ jenkins │ v1.37.0 │ 17 Dec 25 08:31 UTC │ 17 Dec 25 08:32 UTC │
	│ delete  │ -p disable-driver-mounts-606497                                                                                                                                                                                                               │ disable-driver-mounts-606497 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-640910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-581631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p no-preload-936988 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                   │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                               │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:16
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:33:16.728946  893657 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:16.729265  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729278  893657 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:16.729285  893657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:16.729634  893657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:16.730240  893657 out.go:368] Setting JSON to false
	I1217 08:33:16.732006  893657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8142,"bootTime":1765952255,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:16.732103  893657 start.go:143] virtualization: kvm guest
	I1217 08:33:16.736563  893657 out.go:179] * [default-k8s-diff-port-225657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:16.738941  893657 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:16.738995  893657 notify.go:221] Checking for updates...
	I1217 08:33:16.742759  893657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:16.746850  893657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:16.748597  893657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:16.750659  893657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:16.753168  893657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:16.756488  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:16.757459  893657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:16.792888  893657 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:16.793019  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.867744  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.854455776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.867913  893657 docker.go:319] overlay module found
	I1217 08:33:16.871013  893657 out.go:179] * Using the docker driver based on existing profile
	I1217 08:33:16.873307  893657 start.go:309] selected driver: docker
	I1217 08:33:16.873331  893657 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.873487  893657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:16.874376  893657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:16.951072  893657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:33:16.935361077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:16.951510  893657 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:16.951573  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:16.951645  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:16.951709  893657 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:16.956015  893657 out.go:179] * Starting "default-k8s-diff-port-225657" primary control-plane node in "default-k8s-diff-port-225657" cluster
	I1217 08:33:16.957479  893657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:16.959060  893657 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:16.960240  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:16.960283  893657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:16.960307  893657 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:16.960329  893657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:16.960440  893657 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:16.960458  893657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:33:16.960662  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:16.986877  893657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:16.986906  893657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:16.986928  893657 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:16.986979  893657 start.go:360] acquireMachinesLock for default-k8s-diff-port-225657: {Name:mkf524609fef75b896bc809c6c5673b68f778ced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:16.987060  893657 start.go:364] duration metric: took 53.96µs to acquireMachinesLock for "default-k8s-diff-port-225657"
	I1217 08:33:16.987092  893657 start.go:96] Skipping create...Using existing machine configuration
	I1217 08:33:16.987100  893657 fix.go:54] fixHost starting: 
	I1217 08:33:16.987446  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.012833  893657 fix.go:112] recreateIfNeeded on default-k8s-diff-port-225657: state=Stopped err=<nil>
	W1217 08:33:17.012874  893657 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 08:33:13.213803  890801 addons.go:530] duration metric: took 2.18918486s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:13.705622  890801 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 08:33:13.710080  890801 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 08:33:13.711221  890801 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:33:13.711249  890801 api_server.go:131] duration metric: took 506.788041ms to wait for apiserver health ...
	I1217 08:33:13.711258  890801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:13.715494  890801 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:13.715559  890801 system_pods.go:61] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.715571  890801 system_pods.go:61] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.715580  890801 system_pods.go:61] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.715587  890801 system_pods.go:61] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.715598  890801 system_pods.go:61] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.715604  890801 system_pods.go:61] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.715610  890801 system_pods.go:61] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.715617  890801 system_pods.go:61] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.715626  890801 system_pods.go:74] duration metric: took 4.361363ms to wait for pod list to return data ...
	I1217 08:33:13.715639  890801 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:13.718438  890801 default_sa.go:45] found service account: "default"
	I1217 08:33:13.718465  890801 default_sa.go:55] duration metric: took 2.817296ms for default service account to be created ...
	I1217 08:33:13.718477  890801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:13.722138  890801 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:13.722180  890801 system_pods.go:89] "coredns-7d764666f9-ssxts" [db9873f8-e8db-4baa-8894-8deb3f48e4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:13.722194  890801 system_pods.go:89] "etcd-no-preload-936988" [4a703621-5b3b-4c2a-8265-f478ee7a62e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:13.722204  890801 system_pods.go:89] "kindnet-r9bn5" [255a4a0d-7a79-4ee3-93ad-921d40978251] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:33:13.722214  890801 system_pods.go:89] "kube-apiserver-no-preload-936988" [d2af3614-2c40-4bba-aac3-ccc5149d1232] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:13.722223  890801 system_pods.go:89] "kube-controller-manager-no-preload-936988" [3dde7184-6abf-4f14-a166-bafcafec45c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:13.722234  890801 system_pods.go:89] "kube-proxy-rrz8t" [b40fd988-a562-4c15-96e2-da3ecd348a8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:13.722243  890801 system_pods.go:89] "kube-scheduler-no-preload-936988" [95c931e2-544c-42c8-af9e-5f7b5a49ae24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:13.722259  890801 system_pods.go:89] "storage-provisioner" [9765b268-a3ba-4b1f-ac24-d1ad7e741f2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:33:13.722272  890801 system_pods.go:126] duration metric: took 3.785279ms to wait for k8s-apps to be running ...
	I1217 08:33:13.722289  890801 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:13.722352  890801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:13.737774  890801 system_svc.go:56] duration metric: took 15.474847ms WaitForService to wait for kubelet
	I1217 08:33:13.737805  890801 kubeadm.go:587] duration metric: took 2.713427844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:13.737833  890801 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:13.772714  890801 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:13.772756  890801 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:13.772775  890801 node_conditions.go:105] duration metric: took 34.937186ms to run NodePressure ...
	I1217 08:33:13.772792  890801 start.go:242] waiting for startup goroutines ...
	I1217 08:33:13.772803  890801 start.go:247] waiting for cluster config update ...
	I1217 08:33:13.772825  890801 start.go:256] writing updated cluster config ...
	I1217 08:33:13.773173  890801 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:13.777812  890801 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:13.783637  890801 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:15.868337  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:15.181390  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.182344  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:19.681167  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:17.003119  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:19.003173  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:21.003325  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:17.017254  893657 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-225657" ...
	I1217 08:33:17.017346  893657 cli_runner.go:164] Run: docker start default-k8s-diff-port-225657
	I1217 08:33:17.373663  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:17.400760  893657 kic.go:432] container "default-k8s-diff-port-225657" state is running.
	I1217 08:33:17.401442  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:17.429446  893657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/config.json ...
	I1217 08:33:17.429718  893657 machine.go:94] provisionDockerMachine start ...
	I1217 08:33:17.429809  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:17.458096  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:17.458238  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:17.458254  893657 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:33:17.459170  893657 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48512->127.0.0.1:33530: read: connection reset by peer
	I1217 08:33:20.612283  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.612308  893657 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225657"
	I1217 08:33:20.612373  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.636332  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.636502  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.636519  893657 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225657 && echo "default-k8s-diff-port-225657" | sudo tee /etc/hostname
	I1217 08:33:20.804510  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225657
	
	I1217 08:33:20.804742  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:20.834923  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:20.835091  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:20.835140  893657 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225657/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:33:20.984217  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:33:20.984254  893657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:33:20.984307  893657 ubuntu.go:190] setting up certificates
	I1217 08:33:20.984330  893657 provision.go:84] configureAuth start
	I1217 08:33:20.984434  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:21.010705  893657 provision.go:143] copyHostCerts
	I1217 08:33:21.010798  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:33:21.010816  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:33:21.010896  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:33:21.011010  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:33:21.011024  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:33:21.011068  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:33:21.011154  893657 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:33:21.011165  893657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:33:21.011204  893657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:33:21.011353  893657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225657 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-225657 localhost minikube]
	I1217 08:33:21.094979  893657 provision.go:177] copyRemoteCerts
	I1217 08:33:21.095063  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:33:21.095123  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.119755  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:21.226499  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:33:21.252430  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 08:33:21.276413  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:33:21.304875  893657 provision.go:87] duration metric: took 320.523082ms to configureAuth
	I1217 08:33:21.304910  893657 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:33:21.305140  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:21.305286  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:21.329333  893657 main.go:143] libmachine: Using SSH client type: native
	I1217 08:33:21.329469  893657 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1217 08:33:21.329488  893657 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1217 08:33:18.289602  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:20.292744  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:22.296974  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:21.764845  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:24.179988  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	I1217 08:33:22.731689  893657 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:33:22.731722  893657 machine.go:97] duration metric: took 5.301986136s to provisionDockerMachine
	I1217 08:33:22.731749  893657 start.go:293] postStartSetup for "default-k8s-diff-port-225657" (driver="docker")
	I1217 08:33:22.731769  893657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:33:22.731852  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:33:22.731920  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.761364  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:22.876306  893657 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:33:22.881359  893657 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:33:22.881395  893657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:33:22.881410  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:33:22.881482  893657 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:33:22.881678  893657 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:33:22.881825  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:33:22.894563  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:22.920348  893657 start.go:296] duration metric: took 188.5726ms for postStartSetup
	I1217 08:33:22.920449  893657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:33:22.920492  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:22.945406  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.048667  893657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:33:23.054963  893657 fix.go:56] duration metric: took 6.067856877s for fixHost
	I1217 08:33:23.054990  893657 start.go:83] releasing machines lock for "default-k8s-diff-port-225657", held for 6.067916149s
	I1217 08:33:23.055062  893657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225657
	I1217 08:33:23.078512  893657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:33:23.078652  893657 ssh_runner.go:195] Run: cat /version.json
	I1217 08:33:23.078657  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.078715  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:23.105947  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.108771  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:23.290972  893657 ssh_runner.go:195] Run: systemctl --version
	I1217 08:33:23.299819  893657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:33:23.349000  893657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:33:23.357029  893657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:33:23.357106  893657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:33:23.369670  893657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:33:23.369700  893657 start.go:496] detecting cgroup driver to use...
	I1217 08:33:23.369789  893657 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:33:23.369842  893657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:33:23.391525  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:33:23.409286  893657 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:33:23.409355  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:33:23.431984  893657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:33:23.448992  893657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:33:23.545374  893657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:33:23.651657  893657 docker.go:234] disabling docker service ...
	I1217 08:33:23.651738  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:33:23.671894  893657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:33:23.692032  893657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:33:23.817651  893657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:33:23.939609  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:33:23.958144  893657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:33:23.979250  893657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:33:23.979317  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:23.992227  893657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:33:23.992295  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.006950  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.020376  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.035025  893657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:33:24.046957  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.061093  893657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.074985  893657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:33:24.089611  893657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:33:24.101042  893657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:33:24.111709  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:24.230001  893657 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 08:33:24.884276  893657 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:33:24.884364  893657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:33:24.889824  893657 start.go:564] Will wait 60s for crictl version
	I1217 08:33:24.889930  893657 ssh_runner.go:195] Run: which crictl
	I1217 08:33:24.895473  893657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:33:24.926169  893657 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:33:24.926256  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.960427  893657 ssh_runner.go:195] Run: crio --version
	I1217 08:33:24.997284  893657 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 08:33:24.999194  893657 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:25.022353  893657 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 08:33:25.027067  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.040819  893657 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:33:25.040970  893657 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:33:25.041036  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.078474  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.078507  893657 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:33:25.078631  893657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:33:25.106774  893657 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:33:25.106807  893657 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:33:25.106818  893657 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1217 08:33:25.106948  893657 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:33:25.107036  893657 ssh_runner.go:195] Run: crio config
	I1217 08:33:25.157252  893657 cni.go:84] Creating CNI manager for ""
	I1217 08:33:25.157281  893657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:25.157301  893657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:33:25.157340  893657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225657 NodeName:default-k8s-diff-port-225657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:33:25.157504  893657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:33:25.157619  893657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:33:25.166826  893657 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:33:25.166896  893657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:33:25.175526  893657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1217 08:33:25.190511  893657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:33:25.205768  893657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 08:33:25.223688  893657 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:33:25.229125  893657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:33:25.242599  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:25.333339  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:25.360367  893657 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657 for IP: 192.168.103.2
	I1217 08:33:25.360421  893657 certs.go:195] generating shared ca certs ...
	I1217 08:33:25.360443  893657 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:25.360645  893657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:33:25.360690  893657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:33:25.360701  893657 certs.go:257] generating profile certs ...
	I1217 08:33:25.360801  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/client.key
	I1217 08:33:25.360866  893657 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key.632bab92
	I1217 08:33:25.360902  893657 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key
	I1217 08:33:25.361012  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:33:25.361046  893657 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:33:25.361053  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:33:25.361077  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:33:25.361100  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:33:25.361123  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:33:25.361168  893657 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:33:25.361783  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:33:25.382178  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:33:25.405095  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:33:25.426692  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:33:25.452196  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 08:33:25.472263  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:33:25.492102  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:33:25.512166  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/default-k8s-diff-port-225657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 08:33:25.530987  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:33:25.550506  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:33:25.571554  893657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:33:25.591167  893657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:33:25.604816  893657 ssh_runner.go:195] Run: openssl version
	I1217 08:33:25.611390  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.620038  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:33:25.628157  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632565  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.632630  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:33:25.668190  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:33:25.677861  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.686457  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:33:25.694766  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.698960  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.699026  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:33:25.735265  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:33:25.743914  893657 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.752739  893657 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:33:25.762448  893657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766776  893657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.766841  893657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:33:25.804716  893657 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:33:25.813678  893657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:33:25.818021  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:33:25.853937  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:33:25.905092  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:33:25.949996  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:33:25.998953  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:33:26.055041  893657 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 08:33:26.093895  893657 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-225657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:26.093984  893657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:33:26.094037  893657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:33:26.131324  893657 cri.go:89] found id: "29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9"
	I1217 08:33:26.131350  893657 cri.go:89] found id: "f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52"
	I1217 08:33:26.131356  893657 cri.go:89] found id: "e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610"
	I1217 08:33:26.131361  893657 cri.go:89] found id: "75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d"
	I1217 08:33:26.131366  893657 cri.go:89] found id: ""
	I1217 08:33:26.131415  893657 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:33:26.144718  893657 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:33:26Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:33:26.144807  893657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:33:26.153957  893657 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:33:26.153979  893657 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:33:26.154032  893657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:33:26.162673  893657 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:33:26.164033  893657 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-225657" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.165037  893657 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-225657" cluster setting kubeconfig missing "default-k8s-diff-port-225657" context setting]
	I1217 08:33:26.166469  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.168992  893657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:33:26.178665  893657 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 08:33:26.178709  893657 kubeadm.go:602] duration metric: took 24.72291ms to restartPrimaryControlPlane
	I1217 08:33:26.178722  893657 kubeadm.go:403] duration metric: took 84.838549ms to StartCluster
	I1217 08:33:26.178743  893657 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.178810  893657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:26.181267  893657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:26.181609  893657 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:26.181743  893657 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:33:26.181863  893657 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181869  893657 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:26.181897  893657 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.181907  893657 addons.go:248] addon storage-provisioner should already be in state true
	I1217 08:33:26.181905  893657 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181922  893657 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225657"
	I1217 08:33:26.181933  893657 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-225657"
	I1217 08:33:26.181936  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	W1217 08:33:26.181943  893657 addons.go:248] addon dashboard should already be in state true
	I1217 08:33:26.181946  893657 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225657"
	I1217 08:33:26.181976  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.182259  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182470  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.182505  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.184915  893657 out.go:179] * Verifying Kubernetes components...
	I1217 08:33:26.186210  893657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:33:26.212304  893657 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 08:33:26.214226  893657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:33:26.214980  893657 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 08:33:23.502639  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:26.006843  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:26.216388  893657 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.216412  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:33:26.216477  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.217466  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 08:33:26.217490  893657 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 08:33:26.217560  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.228115  893657 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225657"
	W1217 08:33:26.228150  893657 addons.go:248] addon default-storageclass should already be in state true
	I1217 08:33:26.228184  893657 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:33:26.228704  893657 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:33:26.261124  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.263048  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.276039  893657 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.276071  893657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:33:26.276135  893657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:33:26.304101  893657 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:33:26.360397  893657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:33:26.376999  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 08:33:26.377127  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 08:33:26.380755  893657 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:26.392863  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 08:33:26.392899  893657 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 08:33:26.392976  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:33:26.413220  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:33:26.414384  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 08:33:26.414420  893657 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 08:33:26.434913  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 08:33:26.434938  893657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 08:33:26.451283  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 08:33:26.451316  893657 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 08:33:26.476854  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 08:33:26.476882  893657 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 08:33:26.492758  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 08:33:26.492796  893657 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 08:33:26.508872  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 08:33:26.508899  893657 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 08:33:26.524202  893657 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 08:33:26.524232  893657 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 08:33:26.539724  893657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 08:33:24.789524  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:27.291456  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:27.959793  893657 node_ready.go:49] node "default-k8s-diff-port-225657" is "Ready"
	I1217 08:33:27.959838  893657 node_ready.go:38] duration metric: took 1.579048972s for node "default-k8s-diff-port-225657" to be "Ready" ...
	I1217 08:33:27.959857  893657 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:33:27.959926  893657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:33:28.524393  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.131379535s)
	I1217 08:33:28.524466  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111215264s)
	I1217 08:33:28.524703  893657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.98493694s)
	I1217 08:33:28.524763  893657 api_server.go:72] duration metric: took 2.343114327s to wait for apiserver process to appear ...
	I1217 08:33:28.524791  893657 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:33:28.524815  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:28.526653  893657 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-225657 addons enable metrics-server
	
	I1217 08:33:28.530002  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:28.530034  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:28.535131  893657 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1217 08:33:26.679455  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:29.179302  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:28.012078  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	W1217 08:33:30.501159  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:28.536292  893657 addons.go:530] duration metric: took 2.354557541s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:33:29.025630  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.030789  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:33:29.030828  893657 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:33:29.525077  893657 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1217 08:33:29.529889  893657 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1217 08:33:29.530993  893657 api_server.go:141] control plane version: v1.34.3
	I1217 08:33:29.531018  893657 api_server.go:131] duration metric: took 1.006217623s to wait for apiserver health ...
	I1217 08:33:29.531030  893657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:33:29.537008  893657 system_pods.go:59] 8 kube-system pods found
	I1217 08:33:29.537148  893657 system_pods.go:61] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.537251  893657 system_pods.go:61] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.537275  893657 system_pods.go:61] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.537287  893657 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.537302  893657 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.537311  893657 system_pods.go:61] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.537373  893657 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.537391  893657 system_pods.go:61] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.537403  893657 system_pods.go:74] duration metric: took 6.36482ms to wait for pod list to return data ...
	I1217 08:33:29.537418  893657 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:33:29.540237  893657 default_sa.go:45] found service account: "default"
	I1217 08:33:29.540261  893657 default_sa.go:55] duration metric: took 2.835186ms for default service account to be created ...
	I1217 08:33:29.540272  893657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:33:29.547420  893657 system_pods.go:86] 8 kube-system pods found
	I1217 08:33:29.547465  893657 system_pods.go:89] "coredns-66bc5c9577-4n72s" [b659d652-9af1-45eb-be9e-129cf428ab14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 08:33:29.547486  893657 system_pods.go:89] "etcd-default-k8s-diff-port-225657" [dbe38dcd-09b3-4851-86f2-4fb392116d0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:33:29.547494  893657 system_pods.go:89] "kindnet-s5z6t" [29aebd79-e3bf-4715-b6f3-a8ea5baea1eb] Running
	I1217 08:33:29.547502  893657 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-225657" [4a366d74-1d29-4c42-a640-fba99cb73d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:33:29.547511  893657 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-225657" [93343562-5cd0-417d-b171-dc305e580cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:33:29.547519  893657 system_pods.go:89] "kube-proxy-7lhc6" [6a163468-2bc3-4ea8-84ae-bec91b54dd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:33:29.547526  893657 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-225657" [585543f5-3165-4280-8c40-d86f2358b190] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:33:29.547545  893657 system_pods.go:89] "storage-provisioner" [c3d96f21-a7d0-459b-a164-e9cc1e73add9] Running
	I1217 08:33:29.547556  893657 system_pods.go:126] duration metric: took 7.275351ms to wait for k8s-apps to be running ...
	I1217 08:33:29.547565  893657 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:33:29.547621  893657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:29.570551  893657 system_svc.go:56] duration metric: took 22.962055ms WaitForService to wait for kubelet
	I1217 08:33:29.570588  893657 kubeadm.go:587] duration metric: took 3.388942328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:33:29.570612  893657 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:33:29.573955  893657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:33:29.573987  893657 node_conditions.go:123] node cpu capacity is 8
	I1217 08:33:29.574004  893657 node_conditions.go:105] duration metric: took 3.385946ms to run NodePressure ...
	I1217 08:33:29.574016  893657 start.go:242] waiting for startup goroutines ...
	I1217 08:33:29.574023  893657 start.go:247] waiting for cluster config update ...
	I1217 08:33:29.574033  893657 start.go:256] writing updated cluster config ...
	I1217 08:33:29.574301  893657 ssh_runner.go:195] Run: rm -f paused
	I1217 08:33:29.579418  893657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:29.583233  893657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 08:33:31.590019  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:29.790012  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:32.289282  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:31.679660  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:34.180323  885608 pod_ready.go:104] pod "coredns-5dd5756b68-mr99d" is not "Ready", error: <nil>
	W1217 08:33:33.000662  886345 pod_ready.go:104] pod "coredns-66bc5c9577-p7sqj" is not "Ready", error: <nil>
	I1217 08:33:34.501698  886345 pod_ready.go:94] pod "coredns-66bc5c9577-p7sqj" is "Ready"
	I1217 08:33:34.501740  886345 pod_ready.go:86] duration metric: took 31.006567227s for pod "coredns-66bc5c9577-p7sqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.504499  886345 pod_ready.go:83] waiting for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.509821  886345 pod_ready.go:94] pod "etcd-embed-certs-581631" is "Ready"
	I1217 08:33:34.509852  886345 pod_ready.go:86] duration metric: took 5.326473ms for pod "etcd-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.512747  886345 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.518177  886345 pod_ready.go:94] pod "kube-apiserver-embed-certs-581631" is "Ready"
	I1217 08:33:34.518209  886345 pod_ready.go:86] duration metric: took 5.434504ms for pod "kube-apiserver-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.520782  886345 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.699712  886345 pod_ready.go:94] pod "kube-controller-manager-embed-certs-581631" is "Ready"
	I1217 08:33:34.699750  886345 pod_ready.go:86] duration metric: took 178.942994ms for pod "kube-controller-manager-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:34.899576  886345 pod_ready.go:83] waiting for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.299641  886345 pod_ready.go:94] pod "kube-proxy-7z26t" is "Ready"
	I1217 08:33:35.299677  886345 pod_ready.go:86] duration metric: took 400.071136ms for pod "kube-proxy-7z26t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.499469  886345 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.898985  886345 pod_ready.go:94] pod "kube-scheduler-embed-certs-581631" is "Ready"
	I1217 08:33:35.899016  886345 pod_ready.go:86] duration metric: took 399.518108ms for pod "kube-scheduler-embed-certs-581631" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.899032  886345 pod_ready.go:40] duration metric: took 32.408536567s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:35.962165  886345 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:33:35.967810  886345 out.go:179] * Done! kubectl is now configured to use "embed-certs-581631" cluster and "default" namespace by default
	I1217 08:33:35.180035  885608 pod_ready.go:94] pod "coredns-5dd5756b68-mr99d" is "Ready"
	I1217 08:33:35.180070  885608 pod_ready.go:86] duration metric: took 33.507046133s for pod "coredns-5dd5756b68-mr99d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.183848  885608 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.189882  885608 pod_ready.go:94] pod "etcd-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.189917  885608 pod_ready.go:86] duration metric: took 6.040788ms for pod "etcd-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.193611  885608 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.199327  885608 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.199356  885608 pod_ready.go:86] duration metric: took 5.717005ms for pod "kube-apiserver-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.202742  885608 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.377269  885608 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-640910" is "Ready"
	I1217 08:33:35.377299  885608 pod_ready.go:86] duration metric: took 174.528391ms for pod "kube-controller-manager-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.578921  885608 pod_ready.go:83] waiting for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:35.977275  885608 pod_ready.go:94] pod "kube-proxy-cwfwr" is "Ready"
	I1217 08:33:35.977308  885608 pod_ready.go:86] duration metric: took 398.362323ms for pod "kube-proxy-cwfwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.179026  885608 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580866  885608 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-640910" is "Ready"
	I1217 08:33:36.580905  885608 pod_ready.go:86] duration metric: took 401.837858ms for pod "kube-scheduler-old-k8s-version-640910" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:36.580922  885608 pod_ready.go:40] duration metric: took 34.912908892s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:36.657518  885608 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 08:33:36.659911  885608 out.go:203] 
	W1217 08:33:36.661799  885608 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 08:33:36.663761  885608 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 08:33:36.666738  885608 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-640910" cluster and "default" namespace by default
	W1217 08:33:34.089133  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:36.092451  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:34.289870  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:36.290714  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:38.589783  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:41.088727  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:38.290930  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:40.789798  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:43.089067  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:45.089693  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:43.290645  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	W1217 08:33:45.789580  890801 pod_ready.go:104] pod "coredns-7d764666f9-ssxts" is not "Ready", error: <nil>
	I1217 08:33:46.288655  890801 pod_ready.go:94] pod "coredns-7d764666f9-ssxts" is "Ready"
	I1217 08:33:46.288692  890801 pod_ready.go:86] duration metric: took 32.505014626s for pod "coredns-7d764666f9-ssxts" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.291480  890801 pod_ready.go:83] waiting for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.297312  890801 pod_ready.go:94] pod "etcd-no-preload-936988" is "Ready"
	I1217 08:33:46.297340  890801 pod_ready.go:86] duration metric: took 5.835833ms for pod "etcd-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.392910  890801 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.397502  890801 pod_ready.go:94] pod "kube-apiserver-no-preload-936988" is "Ready"
	I1217 08:33:46.397547  890801 pod_ready.go:86] duration metric: took 4.609982ms for pod "kube-apiserver-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.399936  890801 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.487409  890801 pod_ready.go:94] pod "kube-controller-manager-no-preload-936988" is "Ready"
	I1217 08:33:46.487441  890801 pod_ready.go:86] duration metric: took 87.480941ms for pod "kube-controller-manager-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:46.687921  890801 pod_ready.go:83] waiting for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.087638  890801 pod_ready.go:94] pod "kube-proxy-rrz8t" is "Ready"
	I1217 08:33:47.087672  890801 pod_ready.go:86] duration metric: took 399.721259ms for pod "kube-proxy-rrz8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.287284  890801 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687063  890801 pod_ready.go:94] pod "kube-scheduler-no-preload-936988" is "Ready"
	I1217 08:33:47.687100  890801 pod_ready.go:86] duration metric: took 399.78978ms for pod "kube-scheduler-no-preload-936988" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:33:47.687115  890801 pod_ready.go:40] duration metric: took 33.909261319s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:33:47.739016  890801 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:33:47.741018  890801 out.go:179] * Done! kubectl is now configured to use "no-preload-936988" cluster and "default" namespace by default
	W1217 08:33:47.589223  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:49.589806  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 17 08:33:21 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:21.577715368Z" level=info msg="Started container" PID=1745 containerID=8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper id=ed7df3b7-e127-4c92-87da-614af985a602 name=/runtime.v1.RuntimeService/StartContainer sandboxID=923b68fb1d4cb3895974c326556918bb6b83a0174d8f129ffa9a7982fce05459
	Dec 17 08:33:22 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:22.528422741Z" level=info msg="Removing container: 7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f" id=b28eb7cb-7e66-4c8a-91ce-cfc9ade92dcf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:22 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:22.618903029Z" level=info msg="Removed container 7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=b28eb7cb-7e66-4c8a-91ce-cfc9ade92dcf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.55010955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8c93ce67-f2b3-4cfc-b64c-1ead1f277e6a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.551241746Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1d6fb1d4-3512-465b-919e-2b5a92d686a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.552281976Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ac7bfc02-9239-4814-a720-cacfa10d3446 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.552419314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.558378699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.558642367Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9aeca9a97dfe7fd1609b80e585f4ca4e576daa02b04ec061fdbda959b599a75c/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.558673484Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9aeca9a97dfe7fd1609b80e585f4ca4e576daa02b04ec061fdbda959b599a75c/merged/etc/group: no such file or directory"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.559014272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.604121515Z" level=info msg="Created container eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7: kube-system/storage-provisioner/storage-provisioner" id=ac7bfc02-9239-4814-a720-cacfa10d3446 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.604947595Z" level=info msg="Starting container: eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7" id=29f27f2b-0d07-4cc9-86d4-bf7146c08fd3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:31 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:31.60692038Z" level=info msg="Started container" PID=1760 containerID=eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7 description=kube-system/storage-provisioner/storage-provisioner id=29f27f2b-0d07-4cc9-86d4-bf7146c08fd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d82aa67501575596e485a1fabc1a4471f9b6987b92534e190524d59bcb526f88
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.416649512Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=226faa0c-ba76-45d8-9b95-6a940280423b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.417570838Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2418ec21-6476-4503-b1fa-1f4a1e9b89d0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.418719063Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=db5228bb-2f97-430b-95db-bb342364249b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.418905978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.427158228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.428002637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.472618895Z" level=info msg="Created container 9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=db5228bb-2f97-430b-95db-bb342364249b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.473435097Z" level=info msg="Starting container: 9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94" id=03bbe247-bd6f-48b3-9580-c677777736de name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.476132655Z" level=info msg="Started container" PID=1777 containerID=9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper id=03bbe247-bd6f-48b3-9580-c677777736de name=/runtime.v1.RuntimeService/StartContainer sandboxID=923b68fb1d4cb3895974c326556918bb6b83a0174d8f129ffa9a7982fce05459
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.572163591Z" level=info msg="Removing container: 8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f" id=8cb419a6-8ac4-4e26-94f2-582d338931cf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:36 old-k8s-version-640910 crio[570]: time="2025-12-17T08:33:36.589263364Z" level=info msg="Removed container 8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n/dashboard-metrics-scraper" id=8cb419a6-8ac4-4e26-94f2-582d338931cf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9f9ef42bedc44       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   923b68fb1d4cb       dashboard-metrics-scraper-5f989dc9cf-g9p9n       kubernetes-dashboard
	eded2f3d7dd97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   d82aa67501575       storage-provisioner                              kube-system
	e2abf3689b240       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   8da509607af97       kubernetes-dashboard-8694d4445c-qvtl9            kubernetes-dashboard
	84ffa3df3899f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   4c149d2c75170       busybox                                          default
	c38092d4284b4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   8882f37a8282f       coredns-5dd5756b68-mr99d                         kube-system
	960a3bdc04bdf       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   1a3485c40eabf       kube-proxy-cwfwr                                 kube-system
	5fb702ab95ee4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   d82aa67501575       storage-provisioner                              kube-system
	6530bccce6088       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   a86fd6713b58a       kindnet-x9g6n                                    kube-system
	6dfafda4a8376       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   ced912e41202f       kube-apiserver-old-k8s-version-640910            kube-system
	7eafd93060d3f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   ac20085f13316       kube-scheduler-old-k8s-version-640910            kube-system
	10d55d7be36a6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   d679d8868e4d5       etcd-old-k8s-version-640910                      kube-system
	37834c35c7b18       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   f6354c0e2167f       kube-controller-manager-old-k8s-version-640910   kube-system
	
	
	==> coredns [c38092d4284b468cd95031a8c84e47ceccc47981d16b24e73f4739e5b682ef80] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51178 - 31318 "HINFO IN 1666544334221588076.73355886378692974. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.048055049s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-640910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-640910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=old-k8s-version-640910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_31_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-640910
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:33:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:33:31 +0000   Wed, 17 Dec 2025 08:32:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-640910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                a3280b33-8da6-4c10-b813-cb05f9aa1448
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-mr99d                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-640910                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-x9g6n                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-640910             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-640910    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-cwfwr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-640910             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-g9p9n        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qvtl9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-640910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-640910 event: Registered Node old-k8s-version-640910 in Controller
	  Normal  NodeReady                95s                kubelet          Node old-k8s-version-640910 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-640910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-640910 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-640910 event: Registered Node old-k8s-version-640910 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [10d55d7be36a6031742b5e41c0ec0b321aa9931156dd81d08e242cbb87042faf] <==
	{"level":"info","ts":"2025-12-17T08:32:57.031703Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:32:57.031528Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-17T08:32:57.031915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-17T08:32:57.032269Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-17T08:32:57.032455Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T08:32:57.032558Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T08:32:57.035729Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T08:32:57.036089Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T08:32:57.036156Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T08:32:57.036296Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T08:32:57.036325Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T08:32:58.917597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:58.917663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:58.917706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T08:32:58.917722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.91773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.917742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.917752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-17T08:32:58.924927Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-640910 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:32:58.925098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:32:58.925116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:32:58.926437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T08:32:58.926572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-17T08:32:58.929287Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:32:58.929328Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:33:53 up  2:16,  0 user,  load average: 5.02, 4.19, 2.91
	Linux old-k8s-version-640910 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6530bccce608895b0ddd386856e60241278889a3f8ad76ded6aed426d1ad3908] <==
	I1217 08:33:01.037938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:01.038272       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 08:33:01.038463       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:01.038491       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:01.038518       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:01.243645       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:01.243672       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:01.243682       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:01.243806       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:01.634652       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:01.634797       1 metrics.go:72] Registering metrics
	I1217 08:33:01.634877       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:11.245627       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:11.245675       1 main.go:301] handling current node
	I1217 08:33:21.244720       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:21.244763       1 main.go:301] handling current node
	I1217 08:33:31.244081       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:31.244117       1 main.go:301] handling current node
	I1217 08:33:41.244435       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:41.244489       1 main.go:301] handling current node
	I1217 08:33:51.250250       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 08:33:51.250281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6dfafda4a8376a62774b77f103455cc0d2b5f250398def06c2cf32987520ce06] <==
	I1217 08:33:00.204425       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1217 08:33:00.302680       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 08:33:00.310449       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 08:33:00.310583       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 08:33:00.311306       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:00.311430       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 08:33:00.311791       1 aggregator.go:166] initial CRD sync complete...
	I1217 08:33:00.311811       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 08:33:00.311819       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 08:33:00.311826       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:33:00.311848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 08:33:00.313339       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 08:33:00.313398       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 08:33:00.351370       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:33:01.207631       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:33:01.468818       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 08:33:01.517911       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 08:33:01.539696       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:01.550017       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:01.561817       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 08:33:01.602424       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.250.103"}
	I1217 08:33:01.616559       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.39.80"}
	I1217 08:33:12.646566       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 08:33:12.652778       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 08:33:12.681720       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [37834c35c7b18ff5d4e4d0eadba970120383e32a30ffa825e54e232d12310cd5] <==
	I1217 08:33:12.701652       1 shared_informer.go:318] Caches are synced for namespace
	I1217 08:33:12.721496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.337368ms"
	I1217 08:33:12.722101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="172.17µs"
	I1217 08:33:12.733326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.832µs"
	I1217 08:33:12.744025       1 shared_informer.go:318] Caches are synced for stateful set
	I1217 08:33:12.757376       1 shared_informer.go:318] Caches are synced for service account
	I1217 08:33:12.791611       1 shared_informer.go:318] Caches are synced for ephemeral
	I1217 08:33:12.816323       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 08:33:12.824893       1 shared_informer.go:318] Caches are synced for persistent volume
	I1217 08:33:12.828124       1 shared_informer.go:318] Caches are synced for attach detach
	I1217 08:33:12.832610       1 shared_informer.go:318] Caches are synced for PVC protection
	I1217 08:33:12.839197       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 08:33:12.839205       1 shared_informer.go:318] Caches are synced for expand
	I1217 08:33:13.181985       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 08:33:13.216197       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 08:33:13.216236       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 08:33:18.545911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.806255ms"
	I1217 08:33:18.547245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.745µs"
	I1217 08:33:21.533918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="456.442µs"
	I1217 08:33:22.588883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="192.113µs"
	I1217 08:33:23.545814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.12µs"
	I1217 08:33:34.770070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.460632ms"
	I1217 08:33:34.770220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.861µs"
	I1217 08:33:36.587877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.431µs"
	I1217 08:33:42.993398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.47µs"
	
	
	==> kube-proxy [960a3bdc04bdfec99662377004d8feee5da2e703cde83c5b0e1933866e6fa0bf] <==
	I1217 08:33:00.869506       1 server_others.go:69] "Using iptables proxy"
	I1217 08:33:00.885724       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1217 08:33:00.914234       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:00.918008       1 server_others.go:152] "Using iptables Proxier"
	I1217 08:33:00.918056       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 08:33:00.918064       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 08:33:00.918099       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 08:33:00.918368       1 server.go:846] "Version info" version="v1.28.0"
	I1217 08:33:00.918385       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:00.919875       1 config.go:188] "Starting service config controller"
	I1217 08:33:00.919973       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 08:33:00.920047       1 config.go:315] "Starting node config controller"
	I1217 08:33:00.920092       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 08:33:00.920382       1 config.go:97] "Starting endpoint slice config controller"
	I1217 08:33:00.920401       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 08:33:01.021957       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 08:33:01.022079       1 shared_informer.go:318] Caches are synced for node config
	I1217 08:33:01.022151       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [7eafd93060d3f284232024a30952747c643e5687f1522fbae0552e43a2a6bf1b] <==
	I1217 08:32:57.781799       1 serving.go:348] Generated self-signed cert in-memory
	W1217 08:33:00.259380       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:00.259415       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:33:00.259452       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:00.259463       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:00.290799       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 08:33:00.290843       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:00.292589       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:00.292626       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 08:33:00.293811       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 08:33:00.293864       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 08:33:00.392842       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.679745     736 topology_manager.go:215] "Topology Admit Handler" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-g9p9n"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.829977     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eaf0b178-b6e1-417d-8664-8d4f909a1c06-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qvtl9\" (UID: \"eaf0b178-b6e1-417d-8664-8d4f909a1c06\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qvtl9"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.830045     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91c58b72-96f0-47f8-be48-b2c65d86af98-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-g9p9n\" (UID: \"91c58b72-96f0-47f8-be48-b2c65d86af98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.830097     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mm8\" (UniqueName: \"kubernetes.io/projected/eaf0b178-b6e1-417d-8664-8d4f909a1c06-kube-api-access-49mm8\") pod \"kubernetes-dashboard-8694d4445c-qvtl9\" (UID: \"eaf0b178-b6e1-417d-8664-8d4f909a1c06\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qvtl9"
	Dec 17 08:33:12 old-k8s-version-640910 kubelet[736]: I1217 08:33:12.830218     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5g4j\" (UniqueName: \"kubernetes.io/projected/91c58b72-96f0-47f8-be48-b2c65d86af98-kube-api-access-f5g4j\") pod \"dashboard-metrics-scraper-5f989dc9cf-g9p9n\" (UID: \"91c58b72-96f0-47f8-be48-b2c65d86af98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n"
	Dec 17 08:33:21 old-k8s-version-640910 kubelet[736]: I1217 08:33:21.518044     736 scope.go:117] "RemoveContainer" containerID="7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f"
	Dec 17 08:33:21 old-k8s-version-640910 kubelet[736]: I1217 08:33:21.533594     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qvtl9" podStartSLOduration=4.916927 podCreationTimestamp="2025-12-17 08:33:12 +0000 UTC" firstStartedPulling="2025-12-17 08:33:13.021241455 +0000 UTC m=+16.714729442" lastFinishedPulling="2025-12-17 08:33:17.637824976 +0000 UTC m=+21.331312957" observedRunningTime="2025-12-17 08:33:18.529057049 +0000 UTC m=+22.222545043" watchObservedRunningTime="2025-12-17 08:33:21.533510515 +0000 UTC m=+25.226998510"
	Dec 17 08:33:22 old-k8s-version-640910 kubelet[736]: I1217 08:33:22.525253     736 scope.go:117] "RemoveContainer" containerID="7c09748c5bdeab6811ef4f61cab26fa9c995e9a4369274a40f44942bb5e9c13f"
	Dec 17 08:33:22 old-k8s-version-640910 kubelet[736]: I1217 08:33:22.525700     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:22 old-k8s-version-640910 kubelet[736]: E1217 08:33:22.526015     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:23 old-k8s-version-640910 kubelet[736]: I1217 08:33:23.529953     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:23 old-k8s-version-640910 kubelet[736]: E1217 08:33:23.530286     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:24 old-k8s-version-640910 kubelet[736]: I1217 08:33:24.532286     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:24 old-k8s-version-640910 kubelet[736]: E1217 08:33:24.532806     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:31 old-k8s-version-640910 kubelet[736]: I1217 08:33:31.549526     736 scope.go:117] "RemoveContainer" containerID="5fb702ab95ee453d8978dde38c9619ce951ba86e93125485d40d2786c8f6db2b"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: I1217 08:33:36.415959     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: I1217 08:33:36.570752     736 scope.go:117] "RemoveContainer" containerID="8dd9548ee4f535b4f7876aa835bc7c343c1c655d41e9f92d1a08896cdf00b47f"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: I1217 08:33:36.570959     736 scope.go:117] "RemoveContainer" containerID="9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94"
	Dec 17 08:33:36 old-k8s-version-640910 kubelet[736]: E1217 08:33:36.571305     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:42 old-k8s-version-640910 kubelet[736]: I1217 08:33:42.981709     736 scope.go:117] "RemoveContainer" containerID="9f9ef42bedc44b6d28a30f6c590b10b2c2ddbc4b98cca87b72cf81ffc784ff94"
	Dec 17 08:33:42 old-k8s-version-640910 kubelet[736]: E1217 08:33:42.981984     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-g9p9n_kubernetes-dashboard(91c58b72-96f0-47f8-be48-b2c65d86af98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-g9p9n" podUID="91c58b72-96f0-47f8-be48-b2c65d86af98"
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:33:48 old-k8s-version-640910 systemd[1]: kubelet.service: Consumed 1.703s CPU time.
	
	
	==> kubernetes-dashboard [e2abf3689b240d1c4dda4da29b79bd55387e6acf2a5b5cba769a884d583ac8ea] <==
	2025/12/17 08:33:17 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:17 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:17 Using secret token for csrf signing
	2025/12/17 08:33:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:17 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 08:33:17 Generating JWE encryption key
	2025/12/17 08:33:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:17 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:17 Creating in-cluster Sidecar client
	2025/12/17 08:33:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:17 Serving insecurely on HTTP port: 9090
	2025/12/17 08:33:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:17 Starting overwatch
	
	
	==> storage-provisioner [5fb702ab95ee453d8978dde38c9619ce951ba86e93125485d40d2786c8f6db2b] <==
	I1217 08:33:00.808403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:30.812024       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eded2f3d7dd97e65e5fe390a260780f747d0f77a97dafc5d442368f20fc0a0a7] <==
	I1217 08:33:31.623822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:31.640573       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:31.640702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 08:33:49.039434       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:33:49.039582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6840e7cf-d238-43b9-83af-eb3cc68a82f2", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-640910_e6971ad4-0d1b-4a3c-92eb-6d387dc2fee5 became leader
	I1217 08:33:49.039634       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-640910_e6971ad4-0d1b-4a3c-92eb-6d387dc2fee5!
	I1217 08:33:49.144232       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-640910_e6971ad4-0d1b-4a3c-92eb-6d387dc2fee5!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-640910 -n old-k8s-version-640910
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-640910 -n old-k8s-version-640910: exit status 2 (372.869533ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-640910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-936988 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-936988 --alsologtostderr -v=1: exit status 80 (2.389081521s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-936988 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:33:59.536999  901675 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:59.537406  901675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:59.537420  901675 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:59.537428  901675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:59.537831  901675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:59.538094  901675 out.go:368] Setting JSON to false
	I1217 08:33:59.538120  901675 mustload.go:66] Loading cluster: no-preload-936988
	I1217 08:33:59.538494  901675 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:33:59.538956  901675 cli_runner.go:164] Run: docker container inspect no-preload-936988 --format={{.State.Status}}
	I1217 08:33:59.559866  901675 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:33:59.560276  901675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:59.644271  901675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 08:33:59.630271885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:59.645183  901675 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-936988 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 08:33:59.651168  901675 out.go:179] * Pausing node no-preload-936988 ... 
	I1217 08:33:59.652788  901675 host.go:66] Checking if "no-preload-936988" exists ...
	I1217 08:33:59.653155  901675 ssh_runner.go:195] Run: systemctl --version
	I1217 08:33:59.653224  901675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-936988
	I1217 08:33:59.674755  901675 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/no-preload-936988/id_ed25519 Username:docker}
	I1217 08:33:59.769673  901675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:33:59.784643  901675 pause.go:52] kubelet running: true
	I1217 08:33:59.784723  901675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:33:59.963146  901675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:33:59.963253  901675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:00.038319  901675 cri.go:89] found id: "c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49"
	I1217 08:34:00.038350  901675 cri.go:89] found id: "97c224539dea38a56cbb533af780aa05d37bb359cc8d088424eddccb6b15731c"
	I1217 08:34:00.038355  901675 cri.go:89] found id: "7bd3db764b5be30e4859a6ae8be64b44887f5335de328c01c8daafb3d854fa9f"
	I1217 08:34:00.038360  901675 cri.go:89] found id: "3b91cc9500fd20c359b66af0d72ce511742a4a4dd9d972425281289ffb9c61da"
	I1217 08:34:00.038365  901675 cri.go:89] found id: "19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d"
	I1217 08:34:00.038370  901675 cri.go:89] found id: "4a04dde52df9457946dc002e3ef2af5e7084f2be9788ed459eb1ca335bf1e1ae"
	I1217 08:34:00.038374  901675 cri.go:89] found id: "2162ec4f15a028d94c963365a03ab48f17e5f2617346dd8dde6681d9ad8ff2f2"
	I1217 08:34:00.038378  901675 cri.go:89] found id: "bd4cdae9d96e1e13189d3e67b2e16d8a0ee166f0ac1c8e0a1a5a26b07d42354a"
	I1217 08:34:00.038382  901675 cri.go:89] found id: "ca61f1803e341df3aa08a7f60879f9ad25ad19a3e6ea8bfdbfd5efa6968a6ab8"
	I1217 08:34:00.038391  901675 cri.go:89] found id: "f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	I1217 08:34:00.038395  901675 cri.go:89] found id: "949dd6d69e25f469760e017a43ec1a380bc823b7fbf36261ee8497cb8b8e1b19"
	I1217 08:34:00.038420  901675 cri.go:89] found id: ""
	I1217 08:34:00.038476  901675 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:00.051378  901675 retry.go:31] will retry after 229.242954ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:00Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:00.280837  901675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:00.296191  901675 pause.go:52] kubelet running: false
	I1217 08:34:00.296246  901675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:00.441074  901675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:00.441191  901675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:00.521618  901675 cri.go:89] found id: "c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49"
	I1217 08:34:00.521648  901675 cri.go:89] found id: "97c224539dea38a56cbb533af780aa05d37bb359cc8d088424eddccb6b15731c"
	I1217 08:34:00.521652  901675 cri.go:89] found id: "7bd3db764b5be30e4859a6ae8be64b44887f5335de328c01c8daafb3d854fa9f"
	I1217 08:34:00.521656  901675 cri.go:89] found id: "3b91cc9500fd20c359b66af0d72ce511742a4a4dd9d972425281289ffb9c61da"
	I1217 08:34:00.521659  901675 cri.go:89] found id: "19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d"
	I1217 08:34:00.521662  901675 cri.go:89] found id: "4a04dde52df9457946dc002e3ef2af5e7084f2be9788ed459eb1ca335bf1e1ae"
	I1217 08:34:00.521665  901675 cri.go:89] found id: "2162ec4f15a028d94c963365a03ab48f17e5f2617346dd8dde6681d9ad8ff2f2"
	I1217 08:34:00.521667  901675 cri.go:89] found id: "bd4cdae9d96e1e13189d3e67b2e16d8a0ee166f0ac1c8e0a1a5a26b07d42354a"
	I1217 08:34:00.521670  901675 cri.go:89] found id: "ca61f1803e341df3aa08a7f60879f9ad25ad19a3e6ea8bfdbfd5efa6968a6ab8"
	I1217 08:34:00.521677  901675 cri.go:89] found id: "f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	I1217 08:34:00.521680  901675 cri.go:89] found id: "949dd6d69e25f469760e017a43ec1a380bc823b7fbf36261ee8497cb8b8e1b19"
	I1217 08:34:00.521682  901675 cri.go:89] found id: ""
	I1217 08:34:00.521720  901675 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:00.534077  901675 retry.go:31] will retry after 321.984928ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:00Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:00.856824  901675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:00.872042  901675 pause.go:52] kubelet running: false
	I1217 08:34:00.872115  901675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:01.028458  901675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:01.028595  901675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:01.105945  901675 cri.go:89] found id: "c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49"
	I1217 08:34:01.105974  901675 cri.go:89] found id: "97c224539dea38a56cbb533af780aa05d37bb359cc8d088424eddccb6b15731c"
	I1217 08:34:01.105978  901675 cri.go:89] found id: "7bd3db764b5be30e4859a6ae8be64b44887f5335de328c01c8daafb3d854fa9f"
	I1217 08:34:01.105982  901675 cri.go:89] found id: "3b91cc9500fd20c359b66af0d72ce511742a4a4dd9d972425281289ffb9c61da"
	I1217 08:34:01.105984  901675 cri.go:89] found id: "19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d"
	I1217 08:34:01.105991  901675 cri.go:89] found id: "4a04dde52df9457946dc002e3ef2af5e7084f2be9788ed459eb1ca335bf1e1ae"
	I1217 08:34:01.105994  901675 cri.go:89] found id: "2162ec4f15a028d94c963365a03ab48f17e5f2617346dd8dde6681d9ad8ff2f2"
	I1217 08:34:01.105997  901675 cri.go:89] found id: "bd4cdae9d96e1e13189d3e67b2e16d8a0ee166f0ac1c8e0a1a5a26b07d42354a"
	I1217 08:34:01.105999  901675 cri.go:89] found id: "ca61f1803e341df3aa08a7f60879f9ad25ad19a3e6ea8bfdbfd5efa6968a6ab8"
	I1217 08:34:01.106007  901675 cri.go:89] found id: "f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	I1217 08:34:01.106012  901675 cri.go:89] found id: "949dd6d69e25f469760e017a43ec1a380bc823b7fbf36261ee8497cb8b8e1b19"
	I1217 08:34:01.106016  901675 cri.go:89] found id: ""
	I1217 08:34:01.106180  901675 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:01.120181  901675 retry.go:31] will retry after 348.774784ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:01Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:01.469833  901675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:01.484577  901675 pause.go:52] kubelet running: false
	I1217 08:34:01.484667  901675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:01.626501  901675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:01.626593  901675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:01.700904  901675 cri.go:89] found id: "c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49"
	I1217 08:34:01.700931  901675 cri.go:89] found id: "97c224539dea38a56cbb533af780aa05d37bb359cc8d088424eddccb6b15731c"
	I1217 08:34:01.700935  901675 cri.go:89] found id: "7bd3db764b5be30e4859a6ae8be64b44887f5335de328c01c8daafb3d854fa9f"
	I1217 08:34:01.700939  901675 cri.go:89] found id: "3b91cc9500fd20c359b66af0d72ce511742a4a4dd9d972425281289ffb9c61da"
	I1217 08:34:01.700941  901675 cri.go:89] found id: "19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d"
	I1217 08:34:01.700944  901675 cri.go:89] found id: "4a04dde52df9457946dc002e3ef2af5e7084f2be9788ed459eb1ca335bf1e1ae"
	I1217 08:34:01.700960  901675 cri.go:89] found id: "2162ec4f15a028d94c963365a03ab48f17e5f2617346dd8dde6681d9ad8ff2f2"
	I1217 08:34:01.700963  901675 cri.go:89] found id: "bd4cdae9d96e1e13189d3e67b2e16d8a0ee166f0ac1c8e0a1a5a26b07d42354a"
	I1217 08:34:01.700966  901675 cri.go:89] found id: "ca61f1803e341df3aa08a7f60879f9ad25ad19a3e6ea8bfdbfd5efa6968a6ab8"
	I1217 08:34:01.700972  901675 cri.go:89] found id: "f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	I1217 08:34:01.700975  901675 cri.go:89] found id: "949dd6d69e25f469760e017a43ec1a380bc823b7fbf36261ee8497cb8b8e1b19"
	I1217 08:34:01.700978  901675 cri.go:89] found id: ""
	I1217 08:34:01.701018  901675 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:01.747504  901675 out.go:203] 
	W1217 08:34:01.769701  901675 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 08:34:01.769733  901675 out.go:285] * 
	* 
	W1217 08:34:01.775431  901675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 08:34:01.841202  901675 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-936988 --alsologtostderr -v=1 failed: exit status 80
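The exit status 80 above is the GUEST_PAUSE error shown in the stderr block: every retry of `sudo runc list -f json` inside the node fails with `open /run/runc: no such file or directory`, so the pause path never obtains a container list to pause. A minimal way to re-run that probe by hand is sketched below; it is only a diagnostic sketch, it assumes the no-preload-936988 profile from this run still exists, and it reuses the same commands that already appear in the log above.

	# Diagnostic sketch (assumes the profile still exists): repeat the exact
	# container-listing probe that the pause path runs inside the node.
	out/minikube-linux-amd64 -p no-preload-936988 ssh -- sudo runc list -f json

	# Check whether the runc state directory that probe expects is present;
	# per the stderr above, /run/runc is missing on this CRI-O node.
	out/minikube-linux-amd64 -p no-preload-936988 ssh -- ls -ld /run/runc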
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-936988
helpers_test.go:244: (dbg) docker inspect no-preload-936988:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2",
	        "Created": "2025-12-17T08:31:34.254013653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 891081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:33:03.264418822Z",
	            "FinishedAt": "2025-12-17T08:33:02.163066082Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/hosts",
	        "LogPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2-json.log",
	        "Name": "/no-preload-936988",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-936988:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-936988",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2",
	                "LowerDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-936988",
	                "Source": "/var/lib/docker/volumes/no-preload-936988/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-936988",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-936988",
	                "name.minikube.sigs.k8s.io": "no-preload-936988",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1bc0f0af665b6d0393e2914577b6b690ced729520c6f2f0836515f79d4797bb2",
	            "SandboxKey": "/var/run/docker/netns/1bc0f0af665b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-936988": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31552e72b7c34000bc246afc13fd33f7afa373a22fe9db1908bd57c2a71027fe",
	                    "EndpointID": "323c99fdd2f2cd16f54afa2f4d6370fd12d0ea6264d949bf96b8a179507c1c1f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8a:21:87:9c:ac:a7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-936988",
	                        "80dedce31e64"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988: exit status 2 (340.393571ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-936988 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-936988 logs -n 25: (1.853713067s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p no-preload-936988 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ image   │ no-preload-936988 image list --format=json                                                                                                                                                                                                         │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p no-preload-936988 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:33:58.209245  901115 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:58.209665  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.209676  901115 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:58.209684  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.210014  901115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:58.210825  901115 out.go:368] Setting JSON to false
	I1217 08:33:58.212717  901115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8183,"bootTime":1765952255,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:58.212822  901115 start.go:143] virtualization: kvm guest
	I1217 08:33:58.215300  901115 out.go:179] * [newest-cni-441323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:58.216709  901115 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:58.216785  901115 notify.go:221] Checking for updates...
	I1217 08:33:58.219668  901115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:58.221003  901115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:58.222299  901115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:58.223911  901115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:58.225413  901115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:58.227137  901115 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:58.227234  901115 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:33:58.227330  901115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:58.254422  901115 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:58.254541  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.317710  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.306968846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.317827  901115 docker.go:319] overlay module found
	I1217 08:33:58.319793  901115 out.go:179] * Using the docker driver based on user configuration
	I1217 08:33:58.321113  901115 start.go:309] selected driver: docker
	I1217 08:33:58.321131  901115 start.go:927] validating driver "docker" against <nil>
	I1217 08:33:58.321147  901115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:58.321843  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.380013  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.36989995 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.380231  901115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 08:33:58.380277  901115 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 08:33:58.380622  901115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:33:58.382961  901115 out.go:179] * Using Docker driver with root privileges
	I1217 08:33:58.384433  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:33:58.384522  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:58.384562  901115 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:33:58.384682  901115 start.go:353] cluster config:
	{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:58.386396  901115 out.go:179] * Starting "newest-cni-441323" primary control-plane node in "newest-cni-441323" cluster
	I1217 08:33:58.388055  901115 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:58.389524  901115 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:58.390839  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:58.390896  901115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:58.390920  901115 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:58.390939  901115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:58.391040  901115 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:58.391064  901115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 08:33:58.391182  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:33:58.391208  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json: {Name:mkb212e9ad1aef1a5c9052a3b02de8f24d20051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:58.412428  901115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:58.412455  901115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:58.412471  901115 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:58.412508  901115 start.go:360] acquireMachinesLock for newest-cni-441323: {Name:mk9498dbb1eb77dbf697c7e17cff718c09574836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:58.412671  901115 start.go:364] duration metric: took 136.094µs to acquireMachinesLock for "newest-cni-441323"
	I1217 08:33:58.412704  901115 start.go:93] Provisioning new machine with config: &{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:58.412808  901115 start.go:125] createHost starting for "" (driver="docker")
	W1217 08:33:57.088758  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:59.594277  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 17 08:33:25 no-preload-936988 crio[570]: time="2025-12-17T08:33:25.259917893Z" level=info msg="Started container" PID=1757 containerID=a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper id=d0b4a56a-47c1-4359-b409-78b98ba8e605 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9436d3683f6aa9c632a2a3a54aae89413931f139956e2e8599a7834246af104
	Dec 17 08:33:26 no-preload-936988 crio[570]: time="2025-12-17T08:33:26.228578232Z" level=info msg="Removing container: 07058e7b2747e875642ce743a86e0246726f7a9360a6106c7546562c1a2d4f7f" id=bcb580d0-d9b2-49ec-b35f-c3642d2c47f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:26 no-preload-936988 crio[570]: time="2025-12-17T08:33:26.244284083Z" level=info msg="Removed container 07058e7b2747e875642ce743a86e0246726f7a9360a6106c7546562c1a2d4f7f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=bcb580d0-d9b2-49ec-b35f-c3642d2c47f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.128701514Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=22c8c4ed-cc41-447a-8025-49d81f710358 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.134362545Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2ce0471e-9194-46c8-95f8-9e4049d211e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.138277805Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=c6ebc101-9292-4a07-afae-780b179c7b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.138422132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.147489915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.148189538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.183603406Z" level=info msg="Created container f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=c6ebc101-9292-4a07-afae-780b179c7b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.184453669Z" level=info msg="Starting container: f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9" id=0db8ed76-ebd2-4f8b-832e-985718e66c39 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.187147012Z" level=info msg="Started container" PID=1767 containerID=f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper id=0db8ed76-ebd2-4f8b-832e-985718e66c39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9436d3683f6aa9c632a2a3a54aae89413931f139956e2e8599a7834246af104
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.255995467Z" level=info msg="Removing container: a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863" id=eb7a2d47-5f08-47b1-9f93-57dfc0b19941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.268351403Z" level=info msg="Removed container a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=eb7a2d47-5f08-47b1-9f93-57dfc0b19941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.271908551Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fef4badc-fb27-4115-bc18-cf37aa480729 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.273036789Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fbdba8b5-c7ae-4710-b47b-ba1cc61f4e1f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.274178635Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7d325542-a57b-4343-a7fe-a90a9c57ce50 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.274320545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.278679928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.278860599Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/80a2a01ac8f2a8b1cd0f58352c735d1c5400db9b550c39226e63c74b0fbab999/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.278895872Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/80a2a01ac8f2a8b1cd0f58352c735d1c5400db9b550c39226e63c74b0fbab999/merged/etc/group: no such file or directory"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.279180075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.311821376Z" level=info msg="Created container c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49: kube-system/storage-provisioner/storage-provisioner" id=7d325542-a57b-4343-a7fe-a90a9c57ce50 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.312517702Z" level=info msg="Starting container: c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49" id=7e04d83b-df24-4849-af3e-e32746191797 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.3146581Z" level=info msg="Started container" PID=1781 containerID=c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49 description=kube-system/storage-provisioner/storage-provisioner id=7e04d83b-df24-4849-af3e-e32746191797 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75e803de12a48bd2d73818683154ecde820816fd14ddcff186f7bf4b3493f3e1
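	
	The CRI-O lines above are runtime daemon logs from inside the no-preload-936988 node. Assuming CRI-O runs as the usual "crio" systemd unit in the node container (the report does not show its exact collection command, so treat this as an illustrative sketch), a comparable view can be pulled by hand:
	
	  $ minikube ssh -p no-preload-936988 -- sudo journalctl -u crio --no-pager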
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c4688d8cd2141       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   75e803de12a48       storage-provisioner                          kube-system
	f826e28ed163d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   c9436d3683f6a       dashboard-metrics-scraper-867fb5f87b-qrtt2   kubernetes-dashboard
	949dd6d69e25f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   d5d3bea72d011       kubernetes-dashboard-b84665fb8-w6fbt         kubernetes-dashboard
	97c224539dea3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           49 seconds ago      Running             coredns                     0                   e17e752e8e9d0       coredns-7d764666f9-ssxts                     kube-system
	2b28132a279f7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   8ce32e86bf5d2       busybox                                      default
	7bd3db764b5be       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   e8d473762738c       kindnet-r9bn5                                kube-system
	3b91cc9500fd2       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           50 seconds ago      Running             kube-proxy                  0                   ff1d6be8c7ef2       kube-proxy-rrz8t                             kube-system
	19ff69d0515d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   75e803de12a48       storage-provisioner                          kube-system
	4a04dde52df94       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           52 seconds ago      Running             kube-scheduler              0                   4aa2b1c59ba09       kube-scheduler-no-preload-936988             kube-system
	2162ec4f15a02       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           52 seconds ago      Running             etcd                        0                   ce7f621fa4978       etcd-no-preload-936988                       kube-system
	bd4cdae9d96e1       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           52 seconds ago      Running             kube-controller-manager     0                   d40914be0848b       kube-controller-manager-no-preload-936988    kube-system
	ca61f1803e341       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           52 seconds ago      Running             kube-apiserver              0                   237b1843dbb59       kube-apiserver-no-preload-936988             kube-system
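	
	The table above is the node-side CRI view of all containers, running and exited. Assuming the standard CRI tooling is available inside the node (an assumption, not something this report states), an equivalent listing would be:
	
	  $ minikube ssh -p no-preload-936988 -- sudo crictl ps -a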
	
	
	==> coredns [97c224539dea38a56cbb533af780aa05d37bb359cc8d088424eddccb6b15731c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44393 - 65214 "HINFO IN 3749746112953618346.5123318364958283553. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039850921s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-936988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-936988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=no-preload-936988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-936988
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:33:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-936988
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                84138bfd-5159-42b8-821c-3ae7ad0e9cb0
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-ssxts                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-936988                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-r9bn5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-936988              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-936988     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-rrz8t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-936988              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qrtt2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-w6fbt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node no-preload-936988 event: Registered Node no-preload-936988 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node no-preload-936988 event: Registered Node no-preload-936988 in Controller
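	
	The node description above is standard kubectl output for the profile's single control-plane node. Assuming the kubeconfig context that minikube creates under the profile name, it can be regenerated with:
	
	  $ kubectl --context no-preload-936988 describe node no-preload-936988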
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [2162ec4f15a028d94c963365a03ab48f17e5f2617346dd8dde6681d9ad8ff2f2] <==
	{"level":"info","ts":"2025-12-17T08:33:10.717177Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T08:33:10.717707Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T08:33:10.716805Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T08:33:10.718179Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T08:33:10.716735Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:33:10.718202Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:33:10.718385Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:33:11.000262Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T08:33:11.000613Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:33:11.000752Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T08:33:11.000777Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:33:11.000800Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.003583Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.003805Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:33:11.003847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.003860Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.005251Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-936988 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:33:11.005296Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:33:11.005263Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:33:11.005518Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:33:11.005618Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:33:11.006840Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:33:11.007144Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:33:11.009956Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-17T08:33:11.010058Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:34:03 up  2:16,  0 user,  load average: 4.87, 4.19, 2.92
	Linux no-preload-936988 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
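	
	The kernel block above combines host-level facts (uptime and load average, kernel build string, OS release). Assuming shell access to the node, roughly the same information comes from the usual commands; the exact invocations used by the report are not shown, so these are illustrative:
	
	  $ minikube ssh -p no-preload-936988 -- uptime
	  $ minikube ssh -p no-preload-936988 -- uname -a
	  $ minikube ssh -p no-preload-936988 -- cat /etc/os-release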
	
	
	==> kindnet [7bd3db764b5be30e4859a6ae8be64b44887f5335de328c01c8daafb3d854fa9f] <==
	I1217 08:33:13.759869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:13.760120       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 08:33:13.760301       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:13.760324       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:13.760343       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:13.960725       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:13.960783       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:13.960802       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:14.058020       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:14.458250       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:14.458289       1 metrics.go:72] Registering metrics
	I1217 08:33:14.458417       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:23.960715       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:23.960809       1 main.go:301] handling current node
	I1217 08:33:33.963657       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:33.963698       1 main.go:301] handling current node
	I1217 08:33:43.960817       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:43.960877       1 main.go:301] handling current node
	I1217 08:33:53.965671       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:53.965718       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca61f1803e341df3aa08a7f60879f9ad25ad19a3e6ea8bfdbfd5efa6968a6ab8] <==
	I1217 08:33:12.453387       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 08:33:12.453422       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 08:33:12.453465       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:33:12.453996       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:12.458166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:12.473814       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:33:12.460831       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 08:33:12.478586       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:33:12.477183       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 08:33:12.502554       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 08:33:12.512814       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:33:12.514170       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1217 08:33:12.514752       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 08:33:12.544639       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:12.928861       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:33:12.980431       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:33:13.019039       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:13.033167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:13.049976       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:33:13.112016       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.148.218"}
	I1217 08:33:13.126935       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.131.87"}
	I1217 08:33:13.340343       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:33:16.066513       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:33:16.167257       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:33:16.304099       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
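	
	The bracketed hash in the section header above is the kube-apiserver container ID from the container status table. Assuming crictl on the node, the same log can be fetched directly (an illustrative command, not necessarily how the report gathered it):
	
	  $ minikube ssh -p no-preload-936988 -- sudo crictl logs ca61f1803e341df3aa08a7f60879f9ad25ad19a3e6ea8bfdbfd5efa6968a6ab8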
	
	
	==> kube-controller-manager [bd4cdae9d96e1e13189d3e67b2e16d8a0ee166f0ac1c8e0a1a5a26b07d42354a] <==
	I1217 08:33:15.672354       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.674362       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.674846       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675052       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675152       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675162       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675496       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.677475       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.677562       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680008       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680252       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680584       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680609       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.681381       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.682327       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.682372       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.682682       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.684801       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:33:15.688124       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.688141       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.690421       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.771401       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.771430       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:33:15.771436       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:33:15.785472       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [3b91cc9500fd20c359b66af0d72ce511742a4a4dd9d972425281289ffb9c61da] <==
	I1217 08:33:13.578295       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:33:13.663822       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:33:13.764518       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:13.764575       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 08:33:13.764678       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:33:13.787656       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:13.787725       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:33:13.794064       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:33:13.794506       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:33:13.794560       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:13.795781       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:33:13.795810       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:33:13.795843       1 config.go:200] "Starting service config controller"
	I1217 08:33:13.795849       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:33:13.795894       1 config.go:309] "Starting node config controller"
	I1217 08:33:13.795905       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:33:13.795911       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:33:13.795935       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:33:13.795965       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:33:13.896906       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:33:13.896979       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:33:13.897016       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4a04dde52df9457946dc002e3ef2af5e7084f2be9788ed459eb1ca335bf1e1ae] <==
	I1217 08:33:10.892329       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:33:12.370028       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:12.370067       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1217 08:33:12.370087       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:12.370097       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:12.462122       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:33:12.462161       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:12.471355       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:12.471467       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:33:12.472549       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:33:12.493052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:33:12.572326       1 shared_informer.go:377] "Caches are synced"
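	The requestheader warning above carries its own suggested fix; filled in for the scheduler's identity it would look roughly like the following (the rolebinding name is hypothetical and this command was not run here):
	
	  kubectl --context no-preload-936988 -n kube-system create rolebinding extension-apiserver-authentication-reader-scheduler --role=extension-apiserver-authentication-reader --user=system:kube-scheduler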
	
	
	==> kubelet <==
	Dec 17 08:33:26 no-preload-936988 kubelet[724]: E1217 08:33:26.222334     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: E1217 08:33:27.223695     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: I1217 08:33:27.223734     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: E1217 08:33:27.223941     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: E1217 08:33:27.378944     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-936988" containerName="etcd"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: E1217 08:33:28.226733     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-936988" containerName="etcd"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: E1217 08:33:28.226872     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: I1217 08:33:28.226898     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: E1217 08:33:28.227070     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:29 no-preload-936988 kubelet[724]: E1217 08:33:29.755686     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-936988" containerName="kube-controller-manager"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: E1217 08:33:38.128044     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: I1217 08:33:38.128099     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: I1217 08:33:38.254457     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: E1217 08:33:38.254743     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: I1217 08:33:38.254781     724 scope.go:122] "RemoveContainer" containerID="f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: E1217 08:33:38.254994     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:44 no-preload-936988 kubelet[724]: I1217 08:33:44.271411     724 scope.go:122] "RemoveContainer" containerID="19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d"
	Dec 17 08:33:46 no-preload-936988 kubelet[724]: E1217 08:33:46.267191     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ssxts" containerName="coredns"
	Dec 17 08:33:48 no-preload-936988 kubelet[724]: E1217 08:33:48.158715     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:48 no-preload-936988 kubelet[724]: I1217 08:33:48.158754     724 scope.go:122] "RemoveContainer" containerID="f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	Dec 17 08:33:48 no-preload-936988 kubelet[724]: E1217 08:33:48.158931     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:59 no-preload-936988 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:33:59 no-preload-936988 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:33:59 no-preload-936988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:33:59 no-preload-936988 systemd[1]: kubelet.service: Consumed 1.827s CPU time.
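	The kubelet entries above show dashboard-metrics-scraper looping through CrashLoopBackOff with an increasing back-off. A sketch of how the pod named in those messages could be examined for this profile (commands not captured in this run):
	
	  kubectl --context no-preload-936988 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-867fb5f87b-qrtt2
	  kubectl --context no-preload-936988 -n kubernetes-dashboard logs dashboard-metrics-scraper-867fb5f87b-qrtt2 --previous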
	
	
	==> kubernetes-dashboard [949dd6d69e25f469760e017a43ec1a380bc823b7fbf36261ee8497cb8b8e1b19] <==
	2025/12/17 08:33:21 Starting overwatch
	2025/12/17 08:33:21 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:21 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:21 Using secret token for csrf signing
	2025/12/17 08:33:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:21 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/17 08:33:21 Generating JWE encryption key
	2025/12/17 08:33:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:21 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:21 Creating in-cluster Sidecar client
	2025/12/17 08:33:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:21 Serving insecurely on HTTP port: 9090
	2025/12/17 08:33:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
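	The repeated health-check failure above is the dashboard querying its dashboard-metrics-scraper service; a quick check of that service for this profile would be (illustrative, not part of this run):
	
	  kubectl --context no-preload-936988 -n kubernetes-dashboard get svc dashboard-metrics-scraper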
	
	
	==> storage-provisioner [19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d] <==
	I1217 08:33:13.535567       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:43.538227       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
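	The fatal error above is a timeout reaching the apiserver's ClusterIP from inside the cluster. A minimal reachability check from the node, assuming curl is available in the kicbase image, would be:
	
	  out/minikube-linux-amd64 -p no-preload-936988 ssh -- curl -sk https://10.96.0.1:443/version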
	
	
	==> storage-provisioner [c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49] <==
	I1217 08:33:44.328075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:44.337715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:44.337769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:33:44.340154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:47.796251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:52.056894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:55.655353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:58.709333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:01.732342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:01.747708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:01.747891       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:34:01.748082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-936988_9ec4effa-3a65-4d59-877a-3d8d7e8cae64!
	I1217 08:34:01.748079       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"548a8188-8abf-4425-8621-70755d3b9167", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-936988_9ec4effa-3a65-4d59-877a-3d8d7e8cae64 became leader
	W1217 08:34:01.750371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:01.770991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:01.848484       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-936988_9ec4effa-3a65-4d59-877a-3d8d7e8cae64!
	W1217 08:34:03.774820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:03.781323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
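	The recurring deprecation warnings come from the provisioner's leader election still reading and updating a v1 Endpoints object; the object named in the LeaderElection event above could be listed with (illustrative, not run here):
	
	  kubectl --context no-preload-936988 -n kube-system get endpoints k8s.io-minikube-hostpath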
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-936988 -n no-preload-936988
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-936988 -n no-preload-936988: exit status 2 (341.899024ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
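The post-mortem polls individual status fields through minikube's Go-template output; the two fields used in this report could also be combined into a single call (the template labels here are illustrative):

	out/minikube-linux-amd64 status -p no-preload-936988 --format='apiserver={{.APIServer}} host={{.Host}}'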
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-936988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-936988
helpers_test.go:244: (dbg) docker inspect no-preload-936988:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2",
	        "Created": "2025-12-17T08:31:34.254013653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 891081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:33:03.264418822Z",
	            "FinishedAt": "2025-12-17T08:33:02.163066082Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/hosts",
	        "LogPath": "/var/lib/docker/containers/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2/80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2-json.log",
	        "Name": "/no-preload-936988",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-936988:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-936988",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80dedce31e64e8c87fcb37ba38826d895ca00dcb04140aa2c14343d8b28251f2",
	                "LowerDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e1688e900b4a17d90ff4bc05af840e51a24109e65a1d392cc423268280ba279b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-936988",
	                "Source": "/var/lib/docker/volumes/no-preload-936988/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-936988",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-936988",
	                "name.minikube.sigs.k8s.io": "no-preload-936988",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1bc0f0af665b6d0393e2914577b6b690ced729520c6f2f0836515f79d4797bb2",
	            "SandboxKey": "/var/run/docker/netns/1bc0f0af665b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-936988": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31552e72b7c34000bc246afc13fd33f7afa373a22fe9db1908bd57c2a71027fe",
	                    "EndpointID": "323c99fdd2f2cd16f54afa2f4d6370fd12d0ea6264d949bf96b8a179507c1c1f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8a:21:87:9c:ac:a7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-936988",
	                        "80dedce31e64"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
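The State block above reports the container as running and not paused; both fields, and the published 8443/tcp mapping (127.0.0.1:33528 in the NetworkSettings section), can be read directly with commands like the following (a sketch, not output recorded in this run):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-936988
	docker port no-preload-936988 8443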
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988: exit status 2 (356.139092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-936988 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-936988 logs -n 25: (1.163664818s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-640910 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ stop    │ -p embed-certs-581631 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-936988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p no-preload-936988 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ image   │ no-preload-936988 image list --format=json                                                                                                                                                                                                         │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p no-preload-936988 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:33:58.209245  901115 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:58.209665  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.209676  901115 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:58.209684  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.210014  901115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:58.210825  901115 out.go:368] Setting JSON to false
	I1217 08:33:58.212717  901115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8183,"bootTime":1765952255,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:58.212822  901115 start.go:143] virtualization: kvm guest
	I1217 08:33:58.215300  901115 out.go:179] * [newest-cni-441323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:58.216709  901115 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:58.216785  901115 notify.go:221] Checking for updates...
	I1217 08:33:58.219668  901115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:58.221003  901115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:58.222299  901115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:58.223911  901115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:58.225413  901115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:58.227137  901115 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:58.227234  901115 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:33:58.227330  901115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:58.254422  901115 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:58.254541  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.317710  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.306968846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.317827  901115 docker.go:319] overlay module found
	I1217 08:33:58.319793  901115 out.go:179] * Using the docker driver based on user configuration
	I1217 08:33:58.321113  901115 start.go:309] selected driver: docker
	I1217 08:33:58.321131  901115 start.go:927] validating driver "docker" against <nil>
	I1217 08:33:58.321147  901115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:58.321843  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.380013  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.36989995 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.380231  901115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 08:33:58.380277  901115 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 08:33:58.380622  901115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:33:58.382961  901115 out.go:179] * Using Docker driver with root privileges
	I1217 08:33:58.384433  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:33:58.384522  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:58.384562  901115 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:33:58.384682  901115 start.go:353] cluster config:
	{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:58.386396  901115 out.go:179] * Starting "newest-cni-441323" primary control-plane node in "newest-cni-441323" cluster
	I1217 08:33:58.388055  901115 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:58.389524  901115 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:58.390839  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:58.390896  901115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:58.390920  901115 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:58.390939  901115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:58.391040  901115 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:58.391064  901115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 08:33:58.391182  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:33:58.391208  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json: {Name:mkb212e9ad1aef1a5c9052a3b02de8f24d20051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:58.412428  901115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:58.412455  901115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:58.412471  901115 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:58.412508  901115 start.go:360] acquireMachinesLock for newest-cni-441323: {Name:mk9498dbb1eb77dbf697c7e17cff718c09574836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:58.412671  901115 start.go:364] duration metric: took 136.094µs to acquireMachinesLock for "newest-cni-441323"
	I1217 08:33:58.412704  901115 start.go:93] Provisioning new machine with config: &{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:58.412808  901115 start.go:125] createHost starting for "" (driver="docker")
	W1217 08:33:57.088758  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:59.594277  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:33:58.415034  901115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:33:58.415257  901115 start.go:159] libmachine.API.Create for "newest-cni-441323" (driver="docker")
	I1217 08:33:58.415290  901115 client.go:173] LocalClient.Create starting
	I1217 08:33:58.415373  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:33:58.415413  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415433  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415487  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:33:58.415506  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415517  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415864  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:33:58.434032  901115 cli_runner.go:211] docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:33:58.434113  901115 network_create.go:284] running [docker network inspect newest-cni-441323] to gather additional debugging logs...
	I1217 08:33:58.434133  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323
	W1217 08:33:58.451747  901115 cli_runner.go:211] docker network inspect newest-cni-441323 returned with exit code 1
	I1217 08:33:58.451800  901115 network_create.go:287] error running [docker network inspect newest-cni-441323]: docker network inspect newest-cni-441323: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-441323 not found
	I1217 08:33:58.451822  901115 network_create.go:289] output of [docker network inspect newest-cni-441323]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-441323 not found
	
	** /stderr **
	I1217 08:33:58.451966  901115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:58.471268  901115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:33:58.471897  901115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:33:58.472477  901115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:33:58.473327  901115 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fd9860}
	I1217 08:33:58.473352  901115 network_create.go:124] attempt to create docker network newest-cni-441323 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 08:33:58.473406  901115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-441323 newest-cni-441323
	I1217 08:33:58.524366  901115 network_create.go:108] docker network newest-cni-441323 192.168.76.0/24 created
	I1217 08:33:58.524402  901115 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-441323" container
	I1217 08:33:58.524477  901115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:33:58.552769  901115 cli_runner.go:164] Run: docker volume create newest-cni-441323 --label name.minikube.sigs.k8s.io=newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:33:58.576361  901115 oci.go:103] Successfully created a docker volume newest-cni-441323
	I1217 08:33:58.576482  901115 cli_runner.go:164] Run: docker run --rm --name newest-cni-441323-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --entrypoint /usr/bin/test -v newest-cni-441323:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:33:59.010485  901115 oci.go:107] Successfully prepared a docker volume newest-cni-441323
	I1217 08:33:59.010657  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:59.010683  901115 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:33:59.010786  901115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 08:34:03.061472  901115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.050619687s)
	I1217 08:34:03.061515  901115 kic.go:203] duration metric: took 4.05082754s to extract preloaded images to volume ...
	W1217 08:34:03.061647  901115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 08:34:03.061705  901115 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 08:34:03.061761  901115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 08:34:03.129399  901115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-441323 --name newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-441323 --network newest-cni-441323 --ip 192.168.76.2 --volume newest-cni-441323:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	W1217 08:34:02.089192  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:34:03.092339  893657 pod_ready.go:94] pod "coredns-66bc5c9577-4n72s" is "Ready"
	I1217 08:34:03.092383  893657 pod_ready.go:86] duration metric: took 33.509125537s for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.095551  893657 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.100554  893657 pod_ready.go:94] pod "etcd-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.100581  893657 pod_ready.go:86] duration metric: took 5.003785ms for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.103653  893657 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.108621  893657 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.108648  893657 pod_ready.go:86] duration metric: took 4.968185ms for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.111008  893657 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.288962  893657 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.288999  893657 pod_ready.go:86] duration metric: took 177.964518ms for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.488686  893657 pod_ready.go:83] waiting for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.888366  893657 pod_ready.go:94] pod "kube-proxy-7lhc6" is "Ready"
	I1217 08:34:03.888395  893657 pod_ready.go:86] duration metric: took 399.676499ms for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.088489  893657 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488909  893657 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:04.488938  893657 pod_ready.go:86] duration metric: took 400.421537ms for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488950  893657 pod_ready.go:40] duration metric: took 34.90949592s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:34:04.541502  893657 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:34:04.543259  893657 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-225657" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 08:33:25 no-preload-936988 crio[570]: time="2025-12-17T08:33:25.259917893Z" level=info msg="Started container" PID=1757 containerID=a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper id=d0b4a56a-47c1-4359-b409-78b98ba8e605 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9436d3683f6aa9c632a2a3a54aae89413931f139956e2e8599a7834246af104
	Dec 17 08:33:26 no-preload-936988 crio[570]: time="2025-12-17T08:33:26.228578232Z" level=info msg="Removing container: 07058e7b2747e875642ce743a86e0246726f7a9360a6106c7546562c1a2d4f7f" id=bcb580d0-d9b2-49ec-b35f-c3642d2c47f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:26 no-preload-936988 crio[570]: time="2025-12-17T08:33:26.244284083Z" level=info msg="Removed container 07058e7b2747e875642ce743a86e0246726f7a9360a6106c7546562c1a2d4f7f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=bcb580d0-d9b2-49ec-b35f-c3642d2c47f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.128701514Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=22c8c4ed-cc41-447a-8025-49d81f710358 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.134362545Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2ce0471e-9194-46c8-95f8-9e4049d211e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.138277805Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=c6ebc101-9292-4a07-afae-780b179c7b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.138422132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.147489915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.148189538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.183603406Z" level=info msg="Created container f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=c6ebc101-9292-4a07-afae-780b179c7b3b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.184453669Z" level=info msg="Starting container: f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9" id=0db8ed76-ebd2-4f8b-832e-985718e66c39 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.187147012Z" level=info msg="Started container" PID=1767 containerID=f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper id=0db8ed76-ebd2-4f8b-832e-985718e66c39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9436d3683f6aa9c632a2a3a54aae89413931f139956e2e8599a7834246af104
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.255995467Z" level=info msg="Removing container: a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863" id=eb7a2d47-5f08-47b1-9f93-57dfc0b19941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:38 no-preload-936988 crio[570]: time="2025-12-17T08:33:38.268351403Z" level=info msg="Removed container a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2/dashboard-metrics-scraper" id=eb7a2d47-5f08-47b1-9f93-57dfc0b19941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.271908551Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fef4badc-fb27-4115-bc18-cf37aa480729 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.273036789Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fbdba8b5-c7ae-4710-b47b-ba1cc61f4e1f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.274178635Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7d325542-a57b-4343-a7fe-a90a9c57ce50 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.274320545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.278679928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.278860599Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/80a2a01ac8f2a8b1cd0f58352c735d1c5400db9b550c39226e63c74b0fbab999/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.278895872Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/80a2a01ac8f2a8b1cd0f58352c735d1c5400db9b550c39226e63c74b0fbab999/merged/etc/group: no such file or directory"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.279180075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.311821376Z" level=info msg="Created container c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49: kube-system/storage-provisioner/storage-provisioner" id=7d325542-a57b-4343-a7fe-a90a9c57ce50 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.312517702Z" level=info msg="Starting container: c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49" id=7e04d83b-df24-4849-af3e-e32746191797 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:44 no-preload-936988 crio[570]: time="2025-12-17T08:33:44.3146581Z" level=info msg="Started container" PID=1781 containerID=c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49 description=kube-system/storage-provisioner/storage-provisioner id=7e04d83b-df24-4849-af3e-e32746191797 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75e803de12a48bd2d73818683154ecde820816fd14ddcff186f7bf4b3493f3e1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c4688d8cd2141       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   75e803de12a48       storage-provisioner                          kube-system
	f826e28ed163d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   c9436d3683f6a       dashboard-metrics-scraper-867fb5f87b-qrtt2   kubernetes-dashboard
	949dd6d69e25f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   d5d3bea72d011       kubernetes-dashboard-b84665fb8-w6fbt         kubernetes-dashboard
	97c224539dea3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     0                   e17e752e8e9d0       coredns-7d764666f9-ssxts                     kube-system
	2b28132a279f7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   8ce32e86bf5d2       busybox                                      default
	7bd3db764b5be       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   e8d473762738c       kindnet-r9bn5                                kube-system
	3b91cc9500fd2       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           52 seconds ago      Running             kube-proxy                  0                   ff1d6be8c7ef2       kube-proxy-rrz8t                             kube-system
	19ff69d0515d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   75e803de12a48       storage-provisioner                          kube-system
	4a04dde52df94       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           54 seconds ago      Running             kube-scheduler              0                   4aa2b1c59ba09       kube-scheduler-no-preload-936988             kube-system
	2162ec4f15a02       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           54 seconds ago      Running             etcd                        0                   ce7f621fa4978       etcd-no-preload-936988                       kube-system
	bd4cdae9d96e1       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           54 seconds ago      Running             kube-controller-manager     0                   d40914be0848b       kube-controller-manager-no-preload-936988    kube-system
	ca61f1803e341       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           54 seconds ago      Running             kube-apiserver              0                   237b1843dbb59       kube-apiserver-no-preload-936988             kube-system
	
	
	==> coredns [97c224539dea38a56cbb533af780aa05d37bb359cc8d088424eddccb6b15731c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44393 - 65214 "HINFO IN 3749746112953618346.5123318364958283553. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039850921s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-936988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-936988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=no-preload-936988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-936988
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:33:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:33:42 +0000   Wed, 17 Dec 2025 08:32:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-936988
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                84138bfd-5159-42b8-821c-3ae7ad0e9cb0
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-ssxts                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-936988                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-r9bn5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-936988              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-936988     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-rrz8t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-936988              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-qrtt2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-w6fbt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-936988 event: Registered Node no-preload-936988 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-936988 event: Registered Node no-preload-936988 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [2162ec4f15a028d94c963365a03ab48f17e5f2617346dd8dde6681d9ad8ff2f2] <==
	{"level":"info","ts":"2025-12-17T08:33:10.717177Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T08:33:10.717707Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T08:33:10.716805Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T08:33:10.718179Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T08:33:10.716735Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:33:10.718202Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:33:10.718385Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:33:11.000262Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T08:33:11.000613Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:33:11.000752Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T08:33:11.000777Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:33:11.000800Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.003583Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.003805Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:33:11.003847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.003860Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-17T08:33:11.005251Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-936988 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:33:11.005296Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:33:11.005263Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:33:11.005518Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:33:11.005618Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:33:11.006840Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:33:11.007144Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:33:11.009956Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-17T08:33:11.010058Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:34:05 up  2:16,  0 user,  load average: 4.87, 4.19, 2.92
	Linux no-preload-936988 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7bd3db764b5be30e4859a6ae8be64b44887f5335de328c01c8daafb3d854fa9f] <==
	I1217 08:33:13.759869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:13.760120       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 08:33:13.760301       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:13.760324       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:13.760343       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:13.960725       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:13.960783       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:13.960802       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:14.058020       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:14.458250       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:14.458289       1 metrics.go:72] Registering metrics
	I1217 08:33:14.458417       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:23.960715       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:23.960809       1 main.go:301] handling current node
	I1217 08:33:33.963657       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:33.963698       1 main.go:301] handling current node
	I1217 08:33:43.960817       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:43.960877       1 main.go:301] handling current node
	I1217 08:33:53.965671       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:33:53.965718       1 main.go:301] handling current node
	I1217 08:34:03.969631       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 08:34:03.969695       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca61f1803e341df3aa08a7f60879f9ad25ad19a3e6ea8bfdbfd5efa6968a6ab8] <==
	I1217 08:33:12.453387       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 08:33:12.453422       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 08:33:12.453465       1 cache.go:39] Caches are synced for autoregister controller
	I1217 08:33:12.453996       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:12.458166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:12.473814       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:33:12.460831       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 08:33:12.478586       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:33:12.477183       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 08:33:12.502554       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 08:33:12.512814       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:33:12.514170       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1217 08:33:12.514752       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 08:33:12.544639       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:12.928861       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:33:12.980431       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:33:13.019039       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:13.033167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:13.049976       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:33:13.112016       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.148.218"}
	I1217 08:33:13.126935       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.131.87"}
	I1217 08:33:13.340343       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:33:16.066513       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:33:16.167257       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:33:16.304099       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bd4cdae9d96e1e13189d3e67b2e16d8a0ee166f0ac1c8e0a1a5a26b07d42354a] <==
	I1217 08:33:15.672354       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.674362       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.674846       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675052       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675152       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675162       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.675496       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.677475       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.677562       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680008       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680252       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680584       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.680609       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.681381       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.682327       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.682372       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.682682       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.684801       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:33:15.688124       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.688141       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.690421       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.771401       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:15.771430       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:33:15.771436       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:33:15.785472       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [3b91cc9500fd20c359b66af0d72ce511742a4a4dd9d972425281289ffb9c61da] <==
	I1217 08:33:13.578295       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:33:13.663822       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:33:13.764518       1 shared_informer.go:377] "Caches are synced"
	I1217 08:33:13.764575       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 08:33:13.764678       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:33:13.787656       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:13.787725       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:33:13.794064       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:33:13.794506       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:33:13.794560       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:13.795781       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:33:13.795810       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:33:13.795843       1 config.go:200] "Starting service config controller"
	I1217 08:33:13.795849       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:33:13.795894       1 config.go:309] "Starting node config controller"
	I1217 08:33:13.795905       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:33:13.795911       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:33:13.795935       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:33:13.795965       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:33:13.896906       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:33:13.896979       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:33:13.897016       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4a04dde52df9457946dc002e3ef2af5e7084f2be9788ed459eb1ca335bf1e1ae] <==
	I1217 08:33:10.892329       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:33:12.370028       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:12.370067       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1217 08:33:12.370087       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:12.370097       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:12.462122       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:33:12.462161       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:12.471355       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:12.471467       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:33:12.472549       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:33:12.493052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:33:12.572326       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 08:33:26 no-preload-936988 kubelet[724]: E1217 08:33:26.222334     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: E1217 08:33:27.223695     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: I1217 08:33:27.223734     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: E1217 08:33:27.223941     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:27 no-preload-936988 kubelet[724]: E1217 08:33:27.378944     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-936988" containerName="etcd"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: E1217 08:33:28.226733     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-936988" containerName="etcd"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: E1217 08:33:28.226872     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: I1217 08:33:28.226898     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:28 no-preload-936988 kubelet[724]: E1217 08:33:28.227070     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:29 no-preload-936988 kubelet[724]: E1217 08:33:29.755686     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-936988" containerName="kube-controller-manager"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: E1217 08:33:38.128044     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: I1217 08:33:38.128099     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: I1217 08:33:38.254457     724 scope.go:122] "RemoveContainer" containerID="a1b6fbd4d046799f1fbf83f496f27d34ee1194d0dfbf9c50f1635fac3f67e863"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: E1217 08:33:38.254743     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: I1217 08:33:38.254781     724 scope.go:122] "RemoveContainer" containerID="f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	Dec 17 08:33:38 no-preload-936988 kubelet[724]: E1217 08:33:38.254994     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:44 no-preload-936988 kubelet[724]: I1217 08:33:44.271411     724 scope.go:122] "RemoveContainer" containerID="19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d"
	Dec 17 08:33:46 no-preload-936988 kubelet[724]: E1217 08:33:46.267191     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ssxts" containerName="coredns"
	Dec 17 08:33:48 no-preload-936988 kubelet[724]: E1217 08:33:48.158715     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" containerName="dashboard-metrics-scraper"
	Dec 17 08:33:48 no-preload-936988 kubelet[724]: I1217 08:33:48.158754     724 scope.go:122] "RemoveContainer" containerID="f826e28ed163df473d323981b890563ad167ae4d5ac4fdc0db1131da6055e5b9"
	Dec 17 08:33:48 no-preload-936988 kubelet[724]: E1217 08:33:48.158931     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-qrtt2_kubernetes-dashboard(1452c130-d441-42c9-849b-4aad497dcca0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-qrtt2" podUID="1452c130-d441-42c9-849b-4aad497dcca0"
	Dec 17 08:33:59 no-preload-936988 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:33:59 no-preload-936988 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:33:59 no-preload-936988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:33:59 no-preload-936988 systemd[1]: kubelet.service: Consumed 1.827s CPU time.
	
	
	==> kubernetes-dashboard [949dd6d69e25f469760e017a43ec1a380bc823b7fbf36261ee8497cb8b8e1b19] <==
	2025/12/17 08:33:21 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:21 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:21 Using secret token for csrf signing
	2025/12/17 08:33:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:21 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/17 08:33:21 Generating JWE encryption key
	2025/12/17 08:33:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:21 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:21 Creating in-cluster Sidecar client
	2025/12/17 08:33:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:21 Serving insecurely on HTTP port: 9090
	2025/12/17 08:33:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:21 Starting overwatch
	
	
	==> storage-provisioner [19ff69d0515d0cd0446279a3b3fd5791c1422ce192e7952664ecab686fab9e8d] <==
	I1217 08:33:13.535567       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:43.538227       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c4688d8cd214117064e41181611e2773b9d479f14ec7c27800aae8b84d350d49] <==
	I1217 08:33:44.328075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:44.337715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:44.337769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:33:44.340154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:47.796251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:52.056894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:55.655353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:33:58.709333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:01.732342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:01.747708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:01.747891       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:34:01.748082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-936988_9ec4effa-3a65-4d59-877a-3d8d7e8cae64!
	I1217 08:34:01.748079       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"548a8188-8abf-4425-8621-70755d3b9167", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-936988_9ec4effa-3a65-4d59-877a-3d8d7e8cae64 became leader
	W1217 08:34:01.750371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:01.770991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:01.848484       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-936988_9ec4effa-3a65-4d59-877a-3d8d7e8cae64!
	W1217 08:34:03.774820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:03.781323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:05.785273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:05.789121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
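
The pod_ready.go lines in the dump above show minikube polling each kube-system control-plane pod until it reports the Ready condition (roughly 33.5s for coredns, a few milliseconds for the rest). A minimal client-go sketch of that kind of wait loop follows; it is an illustration only, not minikube's pod_ready.go, and the waitPodsReady name, the 2-second poll interval, and the selector argument are assumptions.

	// Illustrative sketch only (not minikube's pod_ready.go): poll pods matching
	// a label selector until every one carries the PodReady=True condition.
	package readiness
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodsReady is a hypothetical helper; the interval and selector are assumptions.
	func waitPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet" and keep polling
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
						break
					}
				}
				if !ready {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	}
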
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-936988 -n no-preload-936988
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-936988 -n no-preload-936988: exit status 2 (322.997585ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-936988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.95s)
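
The storage-provisioner output in the post-mortem above acquires its leader lease through the legacy v1 Endpoints lock, which is why every renewal logs "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice". Below is a hedged sketch of the Lease-based election that client-go offers instead; the lease name reuses the one visible in the log, while the timings, identity handling, and callback bodies are assumptions rather than the provisioner's actual configuration.

	// Sketch of Lease-based leader election with client-go; everything except the
	// lease name/namespace seen in the log above is an assumption.
	package election
	
	import (
		"context"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func runElected(ctx context.Context, cs kubernetes.Interface, id string, run func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: run,                      // start the controller only once elected
				OnStoppedLeading: func() { /* step down */ },
			},
		})
	}
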

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-225657 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-225657 --alsologtostderr -v=1: exit status 80 (1.867847905s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-225657 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:34:16.323634  905730 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:34:16.323907  905730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:16.323916  905730 out.go:374] Setting ErrFile to fd 2...
	I1217 08:34:16.323920  905730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:16.324131  905730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:34:16.324402  905730 out.go:368] Setting JSON to false
	I1217 08:34:16.324429  905730 mustload.go:66] Loading cluster: default-k8s-diff-port-225657
	I1217 08:34:16.324854  905730 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:34:16.325249  905730 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225657 --format={{.State.Status}}
	I1217 08:34:16.344568  905730 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:34:16.344866  905730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:34:16.407970  905730 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-17 08:34:16.396167877 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:34:16.408646  905730 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-225657 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 08:34:16.410697  905730 out.go:179] * Pausing node default-k8s-diff-port-225657 ... 
	I1217 08:34:16.411939  905730 host.go:66] Checking if "default-k8s-diff-port-225657" exists ...
	I1217 08:34:16.412289  905730 ssh_runner.go:195] Run: systemctl --version
	I1217 08:34:16.412346  905730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657
	I1217 08:34:16.432865  905730 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/default-k8s-diff-port-225657/id_ed25519 Username:docker}
	I1217 08:34:16.531510  905730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:16.545443  905730 pause.go:52] kubelet running: true
	I1217 08:34:16.545614  905730 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:16.726905  905730 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:16.726999  905730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:16.814970  905730 cri.go:89] found id: "11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb"
	I1217 08:34:16.814999  905730 cri.go:89] found id: "a9e1fe90bfa59a6c929b5eba24b26253597fb9733c58ea37aa95c7e8900e56f7"
	I1217 08:34:16.815005  905730 cri.go:89] found id: "0cd62d5b17f2764df551c19f52e1054e775c8e02640e841021af7c8178a15f71"
	I1217 08:34:16.815010  905730 cri.go:89] found id: "c5279ea2061e70469a71a04849896dd9d87d3b0990fe82b4954dc5ae121dea7f"
	I1217 08:34:16.815014  905730 cri.go:89] found id: "17ee730c53c472e5d0eef17a0017d6d51b3364629f79ea96528f42e992e7006b"
	I1217 08:34:16.815019  905730 cri.go:89] found id: "29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9"
	I1217 08:34:16.815023  905730 cri.go:89] found id: "f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52"
	I1217 08:34:16.815027  905730 cri.go:89] found id: "e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610"
	I1217 08:34:16.815031  905730 cri.go:89] found id: "75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d"
	I1217 08:34:16.815048  905730 cri.go:89] found id: "86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e"
	I1217 08:34:16.815053  905730 cri.go:89] found id: "4796a7d1b1637c1d9cdc7593b292f5b5928271259cd0bc92f255649f7bdc4917"
	I1217 08:34:16.815057  905730 cri.go:89] found id: ""
	I1217 08:34:16.815113  905730 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:16.833593  905730 retry.go:31] will retry after 232.300746ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:16Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:17.066744  905730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:17.080698  905730 pause.go:52] kubelet running: false
	I1217 08:34:17.080765  905730 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:17.222941  905730 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:17.223031  905730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:17.291210  905730 cri.go:89] found id: "11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb"
	I1217 08:34:17.291234  905730 cri.go:89] found id: "a9e1fe90bfa59a6c929b5eba24b26253597fb9733c58ea37aa95c7e8900e56f7"
	I1217 08:34:17.291239  905730 cri.go:89] found id: "0cd62d5b17f2764df551c19f52e1054e775c8e02640e841021af7c8178a15f71"
	I1217 08:34:17.291242  905730 cri.go:89] found id: "c5279ea2061e70469a71a04849896dd9d87d3b0990fe82b4954dc5ae121dea7f"
	I1217 08:34:17.291245  905730 cri.go:89] found id: "17ee730c53c472e5d0eef17a0017d6d51b3364629f79ea96528f42e992e7006b"
	I1217 08:34:17.291249  905730 cri.go:89] found id: "29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9"
	I1217 08:34:17.291252  905730 cri.go:89] found id: "f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52"
	I1217 08:34:17.291255  905730 cri.go:89] found id: "e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610"
	I1217 08:34:17.291258  905730 cri.go:89] found id: "75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d"
	I1217 08:34:17.291276  905730 cri.go:89] found id: "86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e"
	I1217 08:34:17.291280  905730 cri.go:89] found id: "4796a7d1b1637c1d9cdc7593b292f5b5928271259cd0bc92f255649f7bdc4917"
	I1217 08:34:17.291282  905730 cri.go:89] found id: ""
	I1217 08:34:17.291324  905730 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:17.304652  905730 retry.go:31] will retry after 525.044266ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:17Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:17.830388  905730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:17.856051  905730 pause.go:52] kubelet running: false
	I1217 08:34:17.856128  905730 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:18.027843  905730 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:18.027916  905730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:18.099744  905730 cri.go:89] found id: "11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb"
	I1217 08:34:18.099774  905730 cri.go:89] found id: "a9e1fe90bfa59a6c929b5eba24b26253597fb9733c58ea37aa95c7e8900e56f7"
	I1217 08:34:18.099785  905730 cri.go:89] found id: "0cd62d5b17f2764df551c19f52e1054e775c8e02640e841021af7c8178a15f71"
	I1217 08:34:18.099790  905730 cri.go:89] found id: "c5279ea2061e70469a71a04849896dd9d87d3b0990fe82b4954dc5ae121dea7f"
	I1217 08:34:18.099794  905730 cri.go:89] found id: "17ee730c53c472e5d0eef17a0017d6d51b3364629f79ea96528f42e992e7006b"
	I1217 08:34:18.099798  905730 cri.go:89] found id: "29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9"
	I1217 08:34:18.099802  905730 cri.go:89] found id: "f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52"
	I1217 08:34:18.099807  905730 cri.go:89] found id: "e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610"
	I1217 08:34:18.099812  905730 cri.go:89] found id: "75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d"
	I1217 08:34:18.099820  905730 cri.go:89] found id: "86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e"
	I1217 08:34:18.099830  905730 cri.go:89] found id: "4796a7d1b1637c1d9cdc7593b292f5b5928271259cd0bc92f255649f7bdc4917"
	I1217 08:34:18.099835  905730 cri.go:89] found id: ""
	I1217 08:34:18.099887  905730 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:18.115843  905730 out.go:203] 
	W1217 08:34:18.117366  905730 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 08:34:18.117386  905730 out.go:285] * 
	* 
	W1217 08:34:18.122207  905730 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 08:34:18.124245  905730 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-225657 --alsologtostderr -v=1 failed: exit status 80
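The exit status 80 above is the GUEST_PAUSE path: minikube disables the kubelet, lists containers in the kube-system/kubernetes-dashboard/istio-operator namespaces via crictl, then runs `sudo runc list -f json`, which fails on this crio node with "open /run/runc: no such file or directory". As a rough manual check on the node, assuming the profile still exists (the directory names in the last command are guesses, not something recorded in this report):

	# the two commands minikube pause runs on the node (taken from the stderr log above)
	minikube -p default-k8s-diff-port-225657 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p default-k8s-diff-port-225657 ssh -- sudo runc list -f json
	# assumption: see which low-level runtime state directory actually exists on the node
	minikube -p default-k8s-diff-port-225657 ssh -- ls -d /run/runc /run/crun

If /run/runc is absent, crio may be using a different OCI runtime or a non-default runc root, in which case `runc list` would fail exactly as logged.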
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-225657
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-225657:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57",
	        "Created": "2025-12-17T08:32:08.014706364Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 893892,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:33:17.05070392Z",
	            "FinishedAt": "2025-12-17T08:33:15.263109152Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/hostname",
	        "HostsPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/hosts",
	        "LogPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57-json.log",
	        "Name": "/default-k8s-diff-port-225657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-225657:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-225657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57",
	                "LowerDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-225657",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-225657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-225657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-225657",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-225657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "483a022e4ead22f9e9312a287d4dfcd8ea3fa815793aabf0084af12bc16d06a2",
	            "SandboxKey": "/var/run/docker/netns/483a022e4ead",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-225657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "370bb36dbd55007644b4cd15d494d08d5f62e1e604dbe8d80d1e7f9877cb1b79",
	                    "EndpointID": "d8b2bf66756980a2b687d5dc0ae1f82459da413e076c2726b661256598bf0cd9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7a:95:2f:e7:3a:ce",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-225657",
	                        "79798ebda184"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
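For cross-reference, the mapped SSH port shown above under "Ports" → "22/tcp" (HostPort 33530) is the one the pause log connected to earlier; it is extracted with the same inspect template minikube itself ran, so, assuming the container is still running:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225657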
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
E1217 08:34:18.336387  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kindnet-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657: exit status 2 (332.317563ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225657 logs -n 25
E1217 08:34:18.546389  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-225657 logs -n 25: (1.208733547s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ image   │ no-preload-936988 image list --format=json                                                                                                                                                                                                         │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p no-preload-936988 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ default-k8s-diff-port-225657 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ pause   │ -p default-k8s-diff-port-225657 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:33:58.209245  901115 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:58.209665  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.209676  901115 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:58.209684  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.210014  901115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:58.210825  901115 out.go:368] Setting JSON to false
	I1217 08:33:58.212717  901115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8183,"bootTime":1765952255,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:58.212822  901115 start.go:143] virtualization: kvm guest
	I1217 08:33:58.215300  901115 out.go:179] * [newest-cni-441323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:58.216709  901115 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:58.216785  901115 notify.go:221] Checking for updates...
	I1217 08:33:58.219668  901115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:58.221003  901115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:58.222299  901115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:58.223911  901115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:58.225413  901115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:58.227137  901115 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:58.227234  901115 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:33:58.227330  901115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:58.254422  901115 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:58.254541  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.317710  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.306968846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.317827  901115 docker.go:319] overlay module found
	I1217 08:33:58.319793  901115 out.go:179] * Using the docker driver based on user configuration
	I1217 08:33:58.321113  901115 start.go:309] selected driver: docker
	I1217 08:33:58.321131  901115 start.go:927] validating driver "docker" against <nil>
	I1217 08:33:58.321147  901115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:58.321843  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.380013  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.36989995 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.380231  901115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 08:33:58.380277  901115 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 08:33:58.380622  901115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:33:58.382961  901115 out.go:179] * Using Docker driver with root privileges
	I1217 08:33:58.384433  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:33:58.384522  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:58.384562  901115 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:33:58.384682  901115 start.go:353] cluster config:
	{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:58.386396  901115 out.go:179] * Starting "newest-cni-441323" primary control-plane node in "newest-cni-441323" cluster
	I1217 08:33:58.388055  901115 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:58.389524  901115 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:58.390839  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:58.390896  901115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:58.390920  901115 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:58.390939  901115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:58.391040  901115 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:58.391064  901115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 08:33:58.391182  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:33:58.391208  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json: {Name:mkb212e9ad1aef1a5c9052a3b02de8f24d20051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:58.412428  901115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:58.412455  901115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:58.412471  901115 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:58.412508  901115 start.go:360] acquireMachinesLock for newest-cni-441323: {Name:mk9498dbb1eb77dbf697c7e17cff718c09574836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:58.412671  901115 start.go:364] duration metric: took 136.094µs to acquireMachinesLock for "newest-cni-441323"
	I1217 08:33:58.412704  901115 start.go:93] Provisioning new machine with config: &{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:58.412808  901115 start.go:125] createHost starting for "" (driver="docker")
	W1217 08:33:57.088758  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:59.594277  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:33:58.415034  901115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:33:58.415257  901115 start.go:159] libmachine.API.Create for "newest-cni-441323" (driver="docker")
	I1217 08:33:58.415290  901115 client.go:173] LocalClient.Create starting
	I1217 08:33:58.415373  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:33:58.415413  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415433  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415487  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:33:58.415506  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415517  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415864  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:33:58.434032  901115 cli_runner.go:211] docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:33:58.434113  901115 network_create.go:284] running [docker network inspect newest-cni-441323] to gather additional debugging logs...
	I1217 08:33:58.434133  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323
	W1217 08:33:58.451747  901115 cli_runner.go:211] docker network inspect newest-cni-441323 returned with exit code 1
	I1217 08:33:58.451800  901115 network_create.go:287] error running [docker network inspect newest-cni-441323]: docker network inspect newest-cni-441323: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-441323 not found
	I1217 08:33:58.451822  901115 network_create.go:289] output of [docker network inspect newest-cni-441323]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-441323 not found
	
	** /stderr **
	I1217 08:33:58.451966  901115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:58.471268  901115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:33:58.471897  901115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:33:58.472477  901115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:33:58.473327  901115 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fd9860}
	I1217 08:33:58.473352  901115 network_create.go:124] attempt to create docker network newest-cni-441323 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 08:33:58.473406  901115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-441323 newest-cni-441323
	I1217 08:33:58.524366  901115 network_create.go:108] docker network newest-cni-441323 192.168.76.0/24 created
	I1217 08:33:58.524402  901115 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-441323" container
	I1217 08:33:58.524477  901115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:33:58.552769  901115 cli_runner.go:164] Run: docker volume create newest-cni-441323 --label name.minikube.sigs.k8s.io=newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:33:58.576361  901115 oci.go:103] Successfully created a docker volume newest-cni-441323
	I1217 08:33:58.576482  901115 cli_runner.go:164] Run: docker run --rm --name newest-cni-441323-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --entrypoint /usr/bin/test -v newest-cni-441323:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:33:59.010485  901115 oci.go:107] Successfully prepared a docker volume newest-cni-441323
	I1217 08:33:59.010657  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:59.010683  901115 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:33:59.010786  901115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 08:34:03.061472  901115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.050619687s)
	I1217 08:34:03.061515  901115 kic.go:203] duration metric: took 4.05082754s to extract preloaded images to volume ...
	W1217 08:34:03.061647  901115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 08:34:03.061705  901115 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 08:34:03.061761  901115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 08:34:03.129399  901115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-441323 --name newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-441323 --network newest-cni-441323 --ip 192.168.76.2 --volume newest-cni-441323:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	W1217 08:34:02.089192  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:34:03.092339  893657 pod_ready.go:94] pod "coredns-66bc5c9577-4n72s" is "Ready"
	I1217 08:34:03.092383  893657 pod_ready.go:86] duration metric: took 33.509125537s for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.095551  893657 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.100554  893657 pod_ready.go:94] pod "etcd-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.100581  893657 pod_ready.go:86] duration metric: took 5.003785ms for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.103653  893657 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.108621  893657 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.108648  893657 pod_ready.go:86] duration metric: took 4.968185ms for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.111008  893657 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.288962  893657 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.288999  893657 pod_ready.go:86] duration metric: took 177.964518ms for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.488686  893657 pod_ready.go:83] waiting for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.888366  893657 pod_ready.go:94] pod "kube-proxy-7lhc6" is "Ready"
	I1217 08:34:03.888395  893657 pod_ready.go:86] duration metric: took 399.676499ms for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.088489  893657 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488909  893657 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:04.488938  893657 pod_ready.go:86] duration metric: took 400.421537ms for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488950  893657 pod_ready.go:40] duration metric: took 34.90949592s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:34:04.541502  893657 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:34:04.543259  893657 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-225657" cluster and "default" namespace by default
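	The readiness loop above polls each kube-system control-plane pod until its Ready condition is reported true (or the pod is gone). As an illustrative spot check only (not part of the test run; the context name and label selector are taken from the log above), roughly the same information can be read with:
	
	  kubectl --context default-k8s-diff-port-225657 -n kube-system get pods -l k8s-app=kube-dns \
	    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'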
	I1217 08:34:03.439306  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Running}}
	I1217 08:34:03.462526  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.485136  901115 cli_runner.go:164] Run: docker exec newest-cni-441323 stat /var/lib/dpkg/alternatives/iptables
	I1217 08:34:03.537250  901115 oci.go:144] the created container "newest-cni-441323" has a running status.
	I1217 08:34:03.537321  901115 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519...
	I1217 08:34:03.538963  901115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 08:34:03.571815  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.595363  901115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 08:34:03.595392  901115 kic_runner.go:114] Args: [docker exec --privileged newest-cni-441323 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 08:34:03.657761  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.682624  901115 machine.go:94] provisionDockerMachine start ...
	I1217 08:34:03.682736  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:03.708767  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:03.708929  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:03.708948  901115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:34:03.709844  901115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47126->127.0.0.1:33535: read: connection reset by peer
	I1217 08:34:06.841902  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:06.841936  901115 ubuntu.go:182] provisioning hostname "newest-cni-441323"
	I1217 08:34:06.842009  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:06.862406  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:06.862514  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:06.862526  901115 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-441323 && echo "newest-cni-441323" | sudo tee /etc/hostname
	I1217 08:34:07.014752  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:07.014828  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.035357  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:07.035481  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:07.035497  901115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-441323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-441323/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-441323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:34:07.164769  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:34:07.164806  901115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:34:07.164834  901115 ubuntu.go:190] setting up certificates
	I1217 08:34:07.164847  901115 provision.go:84] configureAuth start
	I1217 08:34:07.164913  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:07.184359  901115 provision.go:143] copyHostCerts
	I1217 08:34:07.184423  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:34:07.184440  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:34:07.184527  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:34:07.184680  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:34:07.184696  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:34:07.184745  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:34:07.184876  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:34:07.184890  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:34:07.184929  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:34:07.185268  901115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.newest-cni-441323 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-441323]
	I1217 08:34:07.230217  901115 provision.go:177] copyRemoteCerts
	I1217 08:34:07.230282  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:34:07.230338  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.249608  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:07.343699  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:34:07.365737  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:34:07.384742  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 08:34:07.404097  901115 provision.go:87] duration metric: took 239.212596ms to configureAuth
	I1217 08:34:07.404134  901115 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:34:07.404298  901115 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:07.404440  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.423488  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:07.423607  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:07.423625  901115 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:34:07.712430  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:34:07.712464  901115 machine.go:97] duration metric: took 4.029805387s to provisionDockerMachine
	I1217 08:34:07.712479  901115 client.go:176] duration metric: took 9.297180349s to LocalClient.Create
	I1217 08:34:07.712510  901115 start.go:167] duration metric: took 9.297251527s to libmachine.API.Create "newest-cni-441323"
	I1217 08:34:07.712519  901115 start.go:293] postStartSetup for "newest-cni-441323" (driver="docker")
	I1217 08:34:07.712552  901115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:34:07.712662  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:34:07.712724  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.733919  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:07.831558  901115 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:34:07.835924  901115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:34:07.835957  901115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:34:07.835974  901115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:34:07.836053  901115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:34:07.836152  901115 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:34:07.836307  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:34:07.844933  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:07.867651  901115 start.go:296] duration metric: took 155.114889ms for postStartSetup
	I1217 08:34:07.867997  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:07.887978  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:34:07.888347  901115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:34:07.888420  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.915159  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.012705  901115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:34:08.018697  901115 start.go:128] duration metric: took 9.605868571s to createHost
	I1217 08:34:08.018729  901115 start.go:83] releasing machines lock for "newest-cni-441323", held for 9.60604277s
	I1217 08:34:08.018827  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:08.039980  901115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:34:08.040009  901115 ssh_runner.go:195] Run: cat /version.json
	I1217 08:34:08.040065  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:08.040090  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:08.063218  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.063893  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.213761  901115 ssh_runner.go:195] Run: systemctl --version
	I1217 08:34:08.220850  901115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:34:08.261606  901115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:34:08.266860  901115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:34:08.266921  901115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:34:08.300150  901115 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 08:34:08.300181  901115 start.go:496] detecting cgroup driver to use...
	I1217 08:34:08.300226  901115 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:34:08.300291  901115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:34:08.333261  901115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:34:08.355969  901115 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:34:08.356062  901115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:34:08.377179  901115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:34:08.397954  901115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:34:08.500475  901115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:34:08.604668  901115 docker.go:234] disabling docker service ...
	I1217 08:34:08.604733  901115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:34:08.625158  901115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:34:08.639634  901115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:34:08.737153  901115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:34:08.827805  901115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:34:08.842243  901115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:34:08.858330  901115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:34:08.858433  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.876760  901115 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:34:08.876835  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.887944  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.898553  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.910181  901115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:34:08.920514  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.930523  901115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.947013  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.958410  901115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:34:08.968015  901115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:34:08.977902  901115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:09.073664  901115 ssh_runner.go:195] Run: sudo systemctl restart crio
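	After the sed edits above and the crio restart, the keys touched in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (an illustrative sketch reconstructed from the commands above, not output captured by the test):
	
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]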
	I1217 08:34:09.227062  901115 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:34:09.227140  901115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:34:09.231935  901115 start.go:564] Will wait 60s for crictl version
	I1217 08:34:09.232006  901115 ssh_runner.go:195] Run: which crictl
	I1217 08:34:09.236638  901115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:34:09.264790  901115 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:34:09.264868  901115 ssh_runner.go:195] Run: crio --version
	I1217 08:34:09.296387  901115 ssh_runner.go:195] Run: crio --version
	I1217 08:34:09.330094  901115 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 08:34:09.331636  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:34:09.351160  901115 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 08:34:09.355602  901115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:09.368495  901115 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 08:34:09.369744  901115 kubeadm.go:884] updating cluster {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:34:09.369922  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:34:09.369998  901115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:09.405950  901115 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:09.405968  901115 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:34:09.406008  901115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:09.434170  901115 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:09.434197  901115 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:34:09.434206  901115 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 08:34:09.434311  901115 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-441323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:34:09.434396  901115 ssh_runner.go:195] Run: crio config
	I1217 08:34:09.487986  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:34:09.488011  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:09.488036  901115 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 08:34:09.488070  901115 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-441323 NodeName:newest-cni-441323 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:34:09.488225  901115 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-441323"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:34:09.488304  901115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 08:34:09.497673  901115 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:34:09.497760  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:34:09.506228  901115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:34:09.519933  901115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:34:09.537334  901115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
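	The kubeadm.yaml.new just copied over is the config rendered above; as a sketch (not something the test does), it can be sanity-checked against the staged kubeadm binary without performing the real init, adding the same --ignore-preflight-errors list used by the actual invocation later in this log if preflight complains:
	
	  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run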
	I1217 08:34:09.551357  901115 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:34:09.555461  901115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:09.566565  901115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:09.651316  901115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:09.675288  901115 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323 for IP: 192.168.76.2
	I1217 08:34:09.675316  901115 certs.go:195] generating shared ca certs ...
	I1217 08:34:09.675339  901115 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.675523  901115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:34:09.675593  901115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:34:09.675608  901115 certs.go:257] generating profile certs ...
	I1217 08:34:09.675704  901115 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key
	I1217 08:34:09.675734  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt with IP's: []
	I1217 08:34:09.828607  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt ...
	I1217 08:34:09.828649  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt: {Name:mk6803cbfa45e76f605eeea681545b33ef9b25d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.828868  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key ...
	I1217 08:34:09.828885  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key: {Name:mk5c01682164f25e871b88d7963d1144482cb1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.828998  901115 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41
	I1217 08:34:09.829019  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 08:34:09.878711  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 ...
	I1217 08:34:09.878745  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41: {Name:mkaef6099c72e1b0c65ea50d007b532d3d965141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.878936  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41 ...
	I1217 08:34:09.878967  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41: {Name:mk0cf2d8641e25fd59679fff6f313c110d8f0f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.879083  901115 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt
	I1217 08:34:09.879203  901115 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key
	I1217 08:34:09.879299  901115 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key
	I1217 08:34:09.879323  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt with IP's: []
	I1217 08:34:09.944132  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt ...
	I1217 08:34:09.944163  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt: {Name:mk0b1987acde9defdb8091756bfaa36ff5338b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.944336  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key ...
	I1217 08:34:09.944349  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key: {Name:mkfc2f20370f9d628ea83706f59262dab55769e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.944585  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:34:09.944629  901115 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:34:09.944641  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:34:09.944672  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:34:09.944697  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:34:09.944723  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:34:09.944764  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:09.945320  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:34:09.964167  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:34:09.982444  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:34:10.000978  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:34:10.020856  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:34:10.040195  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 08:34:10.059883  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:34:10.079301  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:34:10.098390  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:34:10.119913  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:34:10.139098  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:34:10.157111  901115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:34:10.170095  901115 ssh_runner.go:195] Run: openssl version
	I1217 08:34:10.176426  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.184186  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:34:10.191942  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.195711  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.195773  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.231702  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:10.239607  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:10.247261  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.255073  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:34:10.262983  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.267054  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.267109  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.302271  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:34:10.310816  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:34:10.319340  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.327110  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:34:10.335023  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.339265  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.339340  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.374610  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:34:10.382782  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
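	The openssl/ln pairs above follow the standard OpenSSL hashed-directory convention: each CA placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. A minimal sketch of how one of the link names seen above (b5213941.0 for minikubeCA.pem) is derived:
	
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 in this run
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"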
	I1217 08:34:10.390828  901115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:34:10.394784  901115 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:34:10.394852  901115 kubeadm.go:401] StartCluster: {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:10.394943  901115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:34:10.395019  901115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:34:10.424810  901115 cri.go:89] found id: ""
	I1217 08:34:10.424888  901115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:34:10.433076  901115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:34:10.441063  901115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:34:10.441110  901115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:34:10.448884  901115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:34:10.448906  901115 kubeadm.go:158] found existing configuration files:
	
	I1217 08:34:10.448957  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 08:34:10.456858  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:34:10.456924  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:34:10.465105  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 08:34:10.473594  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:34:10.473659  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:34:10.482079  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 08:34:10.491034  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:34:10.491102  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:34:10.499492  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 08:34:10.507794  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:34:10.507859  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:34:10.515670  901115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:34:10.557013  901115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 08:34:10.557109  901115 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:34:10.629084  901115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:34:10.629199  901115 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:34:10.629303  901115 kubeadm.go:319] OS: Linux
	I1217 08:34:10.629383  901115 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:34:10.629463  901115 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:34:10.629543  901115 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:34:10.629656  901115 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:34:10.629746  901115 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:34:10.629828  901115 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:34:10.629897  901115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:34:10.629949  901115 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:34:10.691032  901115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:34:10.691132  901115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:34:10.691265  901115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:34:10.699576  901115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:34:10.706854  901115 out.go:252]   - Generating certificates and keys ...
	I1217 08:34:10.706950  901115 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:34:10.707017  901115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:34:10.745346  901115 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:34:10.782675  901115 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:34:10.808586  901115 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:34:10.866200  901115 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:34:10.937802  901115 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:34:10.938015  901115 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-441323] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 08:34:11.066458  901115 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:34:11.066678  901115 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-441323] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 08:34:11.167863  901115 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:34:11.242887  901115 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:34:11.388467  901115 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:34:11.388602  901115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:34:11.472085  901115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:34:11.621733  901115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:34:11.650118  901115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:34:11.858013  901115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:34:12.025062  901115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:34:12.025578  901115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:34:12.031983  901115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:34:12.034787  901115 out.go:252]   - Booting up control plane ...
	I1217 08:34:12.034921  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:34:12.035032  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:34:12.035109  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:34:12.053701  901115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:34:12.053874  901115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:34:12.061199  901115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:34:12.061396  901115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:34:12.061467  901115 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:34:12.160091  901115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:34:12.160237  901115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:34:12.661926  901115 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8228ms
	I1217 08:34:12.666583  901115 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:34:12.666727  901115 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 08:34:12.666872  901115 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:34:12.666948  901115 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:34:13.672038  901115 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005468722s
	I1217 08:34:14.481559  901115 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.814984426s
	I1217 08:34:16.168904  901115 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502284655s
	I1217 08:34:16.186450  901115 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:34:16.198199  901115 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:34:16.209895  901115 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:34:16.210178  901115 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-441323 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:34:16.220268  901115 kubeadm.go:319] [bootstrap-token] Using token: 9ej1lh.70xuxr5pnsrao1sw
	I1217 08:34:16.221916  901115 out.go:252]   - Configuring RBAC rules ...
	I1217 08:34:16.222074  901115 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:34:16.227780  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:34:16.235400  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:34:16.238948  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:34:16.243237  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:34:16.246753  901115 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:34:16.576527  901115 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:34:16.994999  901115 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 08:34:17.577565  901115 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:34:17.578524  901115 kubeadm.go:319] 
	I1217 08:34:17.578648  901115 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:34:17.578658  901115 kubeadm.go:319] 
	I1217 08:34:17.578769  901115 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:34:17.578784  901115 kubeadm.go:319] 
	I1217 08:34:17.578806  901115 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:34:17.578878  901115 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:34:17.578965  901115 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:34:17.578981  901115 kubeadm.go:319] 
	I1217 08:34:17.579049  901115 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:34:17.579065  901115 kubeadm.go:319] 
	I1217 08:34:17.579128  901115 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:34:17.579136  901115 kubeadm.go:319] 
	I1217 08:34:17.579221  901115 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:34:17.579317  901115 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:34:17.579402  901115 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:34:17.579412  901115 kubeadm.go:319] 
	I1217 08:34:17.579522  901115 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:34:17.579646  901115 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:34:17.579657  901115 kubeadm.go:319] 
	I1217 08:34:17.579757  901115 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9ej1lh.70xuxr5pnsrao1sw \
	I1217 08:34:17.579888  901115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:34:17.579922  901115 kubeadm.go:319] 	--control-plane 
	I1217 08:34:17.579931  901115 kubeadm.go:319] 
	I1217 08:34:17.580054  901115 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:34:17.580062  901115 kubeadm.go:319] 
	I1217 08:34:17.580155  901115 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9ej1lh.70xuxr5pnsrao1sw \
	I1217 08:34:17.580304  901115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:34:17.582979  901115 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:34:17.583110  901115 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:34:17.583140  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:34:17.583153  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:17.585317  901115 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 08:34:17.586963  901115 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:34:17.591670  901115 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 08:34:17.591691  901115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:34:17.606505  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:34:17.830969  901115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:34:17.831056  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:17.831105  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-441323 minikube.k8s.io/updated_at=2025_12_17T08_34_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=newest-cni-441323 minikube.k8s.io/primary=true
	I1217 08:34:17.930527  901115 ops.go:34] apiserver oom_adj: -16
	I1217 08:34:17.930575  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
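The kubeadm output above already spells out the post-init steps; collected in one place they amount to the following (a minimal sketch run on the control-plane node, using only paths and names taken from this log):

	# Make the admin kubeconfig usable for a regular user (verbatim from the kubeadm output)
	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

	# Apply the pod network minikube selected (kindnet) and confirm the node registers
	kubectl apply -f /var/tmp/minikube/cni.yaml
	kubectl get node newest-cni-441323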
	
	
	==> CRI-O <==
	Dec 17 08:33:39 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:39.578978895Z" level=info msg="Started container" PID=1762 containerID=7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper id=fc8d2ab8-1578-43b2-b01d-c2a9a9a197b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2cb20e7f4f33377f8f8fde2ed388e97c9e0de52fa62ff05fc91239288b38836
	Dec 17 08:33:40 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:40.541302356Z" level=info msg="Removing container: 7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa" id=9d502127-012e-49fb-b0b2-b043177a33a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:40 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:40.551655397Z" level=info msg="Removed container 7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=9d502127-012e-49fb-b0b2-b043177a33a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.594810657Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c102cada-64bc-4c99-b526-51b7732c55ad name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.596165798Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7405fb3a-2b9f-428c-a9c9-f6e9b4b52a36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.59727963Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bde5559d-67cc-43d7-9805-8f86ff7aa464 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.597415556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.60338108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.603604236Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ae611f457d30c8c036a6665403fad881a09ccea7946a54c36d034fe137e27f01/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.603729971Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ae611f457d30c8c036a6665403fad881a09ccea7946a54c36d034fe137e27f01/merged/etc/group: no such file or directory"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.604476566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.633430886Z" level=info msg="Created container 11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb: kube-system/storage-provisioner/storage-provisioner" id=bde5559d-67cc-43d7-9805-8f86ff7aa464 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.634296124Z" level=info msg="Starting container: 11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb" id=35ca1bcd-ee8a-477d-ab58-fd9c2afbe1d1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.636823055Z" level=info msg="Started container" PID=1778 containerID=11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb description=kube-system/storage-provisioner/storage-provisioner id=35ca1bcd-ee8a-477d-ab58-fd9c2afbe1d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c71ce80ebbbd7d1ee8837812b675b72026bbd38d40ac571f2d45e4f7a524a4ed
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.458749753Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e00b80a4-d8b7-4115-bb4d-7e27185bb81e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.525450902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=84066087-485d-4d6b-9490-3343b77f3b11 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.527777254Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=97f08e50-2efb-44f1-8729-175b46d14839 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.527895423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.68755638Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.688102726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.844724002Z" level=info msg="Created container 86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=97f08e50-2efb-44f1-8729-175b46d14839 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.845575998Z" level=info msg="Starting container: 86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e" id=e0e766a2-cd33-43c5-985e-0db33b8cf5be name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.848473725Z" level=info msg="Started container" PID=1794 containerID=86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper id=e0e766a2-cd33-43c5-985e-0db33b8cf5be name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2cb20e7f4f33377f8f8fde2ed388e97c9e0de52fa62ff05fc91239288b38836
	Dec 17 08:34:03 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:03.618640725Z" level=info msg="Removing container: 7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43" id=3508457d-3774-4531-9ec3-565ca50189e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:34:03 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:03.633040444Z" level=info msg="Removed container 7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=3508457d-3774-4531-9ec3-565ca50189e5 name=/runtime.v1.RuntimeService/RemoveContainer
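The repeated create/remove cycle for the dashboard-metrics-scraper container above is the runtime-side view of a restarting pod; the pod-side reason can be pulled with standard kubectl calls (a sketch, assuming kubeconfig access to the default-k8s-diff-port-225657 cluster):

	kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-6ffb444bf9-4lbbg
	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-4lbbg --previous
	kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-4lbbg | grep -A5 'Last State'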
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	86f065c8ae64e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   e2cb20e7f4f33       dashboard-metrics-scraper-6ffb444bf9-4lbbg             kubernetes-dashboard
	11a6660bcfd34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   c71ce80ebbbd7       storage-provisioner                                    kube-system
	4796a7d1b1637       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   47927a8a3b870       kubernetes-dashboard-855c9754f9-z7zjk                  kubernetes-dashboard
	a9e1fe90bfa59       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   1c8983632e520       coredns-66bc5c9577-4n72s                               kube-system
	7745cdd1cec19       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   20ece4cfbc2a0       busybox                                                default
	0cd62d5b17f27       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           50 seconds ago      Running             kube-proxy                  0                   ef0359cb2e9b5       kube-proxy-7lhc6                                       kube-system
	c5279ea2061e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   c71ce80ebbbd7       storage-provisioner                                    kube-system
	17ee730c53c47       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   01870cb69d1f3       kindnet-s5z6t                                          kube-system
	29406bff376a7       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           53 seconds ago      Running             kube-apiserver              0                   b227043ef68e5       kube-apiserver-default-k8s-diff-port-225657            kube-system
	f5429cbfa6cd1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           53 seconds ago      Running             kube-controller-manager     0                   85fae2e3f5597       kube-controller-manager-default-k8s-diff-port-225657   kube-system
	e12c965d867c6       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           53 seconds ago      Running             kube-scheduler              0                   f1eab560b5bc7       kube-scheduler-default-k8s-diff-port-225657            kube-system
	75f6d050456bf       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   4c78723d09881       etcd-default-k8s-diff-port-225657                      kube-system
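The listing above is the shape crictl reports on the node; the same view can be reproduced directly against CRI-O, including the filtered form the test harness itself runs (a sketch, executed on the node with the default crictl endpoint pointing at CRI-O):

	sudo crictl ps -a                                                            # all containers, including exited ones
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system    # IDs only, as used by the harness
	sudo crictl inspect 86f065c8ae64e                                            # details for the exited dashboard-metrics-scraper container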
	
	
	==> coredns [a9e1fe90bfa59a6c929b5eba24b26253597fb9733c58ea37aa95c7e8900e56f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44494 - 62350 "HINFO IN 5448690550659840414.6375439613802673137. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030231971s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
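The i/o timeouts against 10.96.0.1:443 above mean CoreDNS briefly could not reach the in-cluster apiserver Service while starting; a quick reachability check from inside the cluster looks like this (a sketch, assuming kubectl access; the curl image is illustrative, not something this run used):

	kubectl get svc kubernetes -o wide                                       # ClusterIP should be 10.96.0.1
	kubectl get endpointslices -l kubernetes.io/service-name=kubernetes      # backing apiserver endpoints
	kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- \
	  curl -sk https://10.96.0.1:443/healthz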
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-225657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-225657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=default-k8s-diff-port-225657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-225657
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:34:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-225657
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                05461791-e89b-4d46-9592-b5168df83171
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-4n72s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-225657                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-s5z6t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-225657             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-225657    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-7lhc6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-225657             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4lbbg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z7zjk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node default-k8s-diff-port-225657 event: Registered Node default-k8s-diff-port-225657 in Controller
	  Normal  NodeReady                94s                kubelet          Node default-k8s-diff-port-225657 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node default-k8s-diff-port-225657 event: Registered Node default-k8s-diff-port-225657 in Controller
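The "Allocated resources" block above is simply the per-pod requests summed: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, which against the node's 8 CPUs (8000m) is about 10.6%, reported as 850m (10%). Memory requests are 70Mi + 100Mi + 50Mi = 220Mi, and the only limits set (coredns 170Mi, kindnet 100m CPU / 50Mi) add up to the 100m CPU / 220Mi memory limit figures.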
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d] <==
	{"level":"warn","ts":"2025-12-17T08:33:27.320630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.328373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.335399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.343680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.351817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.360772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.369178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.376437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.385043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.393960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.403524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.412399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.419565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.428096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.437576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.445313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.453101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.474483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.482691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.493894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.536607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:34:02.812049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.980784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-4n72s\" limit:1 ","response":"range_response_count:1 size:5946"}
	{"level":"info","ts":"2025-12-17T08:34:02.812142Z","caller":"traceutil/trace.go:172","msg":"trace[1628781002] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-4n72s; range_end:; response_count:1; response_revision:618; }","duration":"227.083982ms","start":"2025-12-17T08:34:02.585044Z","end":"2025-12-17T08:34:02.812128Z","steps":["trace[1628781002] 'range keys from in-memory index tree'  (duration: 226.782808ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:34:02.811983Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.920608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:34:02.812246Z","caller":"traceutil/trace.go:172","msg":"trace[937819424] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:618; }","duration":"106.204137ms","start":"2025-12-17T08:34:02.706034Z","end":"2025-12-17T08:34:02.812238Z","steps":["trace[937819424] 'range keys from in-memory index tree'  (duration: 105.844798ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:34:19 up  2:16,  0 user,  load average: 4.45, 4.13, 2.92
	Linux default-k8s-diff-port-225657 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17ee730c53c472e5d0eef17a0017d6d51b3364629f79ea96528f42e992e7006b] <==
	I1217 08:33:29.091767       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:29.092058       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 08:33:29.092233       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:29.092262       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:29.092288       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:29.293620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:29.293758       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:29.293782       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:29.293927       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:29.689751       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:29.689823       1 metrics.go:72] Registering metrics
	I1217 08:33:29.689880       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:39.292676       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:33:39.292738       1 main.go:301] handling current node
	I1217 08:33:49.294671       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:33:49.294711       1 main.go:301] handling current node
	I1217 08:33:59.292691       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:33:59.292742       1 main.go:301] handling current node
	I1217 08:34:09.292667       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:34:09.292733       1 main.go:301] handling current node
	I1217 08:34:19.299431       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:34:19.299476       1 main.go:301] handling current node
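kindnet above only handles the node IPs and pod CIDR the apiserver hands it, so the useful cross-check is that the node object and the CNI agree (a sketch, assuming kubectl access; the kindnet pod name is the one from the container listing above):

	kubectl get node default-k8s-diff-port-225657 \
	  -o jsonpath='{.spec.podCIDR}{"\n"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}'
	kubectl -n kube-system logs kindnet-s5z6t --tail=20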
	
	
	==> kube-apiserver [29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9] <==
	I1217 08:33:28.020669       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:28.020730       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:33:28.020786       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:33:28.020842       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:33:28.021388       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 08:33:28.021483       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 08:33:28.021648       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 08:33:28.027714       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 08:33:28.028025       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 08:33:28.047679       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:33:28.051249       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 08:33:28.060940       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:33:28.079119       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:33:28.305169       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:33:28.337090       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:33:28.360021       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:28.367289       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:28.375153       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:33:28.414796       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.34.45"}
	I1217 08:33:28.428093       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.119.251"}
	I1217 08:33:28.925214       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:33:31.423348       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:33:31.774550       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:33:31.922985       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:33:31.922985       1 controller.go:667] quota admission added evaluator for: replicasets.apps
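The two "allocated clusterIPs" lines above correspond to the dashboard Services created during this run; they are easy to confirm from the cluster (a sketch, assuming kubectl access):

	kubectl -n kubernetes-dashboard get svc -o wide
	# expected ClusterIPs from the log: kubernetes-dashboard 10.100.34.45, dashboard-metrics-scraper 10.102.119.251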
	
	
	==> kube-controller-manager [f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52] <==
	I1217 08:33:31.356869       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:33:31.360110       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:33:31.361424       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 08:33:31.364722       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:33:31.369594       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 08:33:31.369626       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 08:33:31.369660       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 08:33:31.369704       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 08:33:31.369770       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 08:33:31.369801       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 08:33:31.369845       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 08:33:31.369859       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 08:33:31.369912       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-225657"
	I1217 08:33:31.369981       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 08:33:31.369982       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:33:31.370002       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:33:31.370009       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 08:33:31.369999       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 08:33:31.370197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:33:31.373544       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 08:33:31.381713       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:33:31.386885       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:33:31.389037       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:33:31.391412       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:33:31.404044       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [0cd62d5b17f2764df551c19f52e1054e775c8e02640e841021af7c8178a15f71] <==
	I1217 08:33:28.881150       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:33:28.964554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:33:29.064761       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:33:29.064813       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 08:33:29.064925       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:33:29.087906       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:29.087979       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:33:29.094287       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:33:29.095310       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:33:29.095424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:29.097118       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:33:29.097146       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:33:29.097177       1 config.go:200] "Starting service config controller"
	I1217 08:33:29.097187       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:33:29.097287       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:33:29.097295       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:33:29.097294       1 config.go:309] "Starting node config controller"
	I1217 08:33:29.097855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:33:29.097872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:33:29.197323       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:33:29.197362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:33:29.199937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
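The nodePortAddresses warning above is advisory; on a kubeadm-managed cluster kube-proxy reads its configuration from the kube-proxy ConfigMap in kube-system, which is where the suggested "primary" setting would go (a sketch, assuming the standard kubeadm ConfigMap name):

	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n -A1 nodePortAddresses
	# setting nodePortAddresses: ["primary"] (or explicit CIDRs) and restarting kube-proxy silences the warning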
	
	
	==> kube-scheduler [e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610] <==
	I1217 08:33:26.915442       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:33:27.956712       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:27.956834       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:33:27.956850       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:27.956860       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:27.984767       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:33:27.984813       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:27.998007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:27.998059       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:27.999174       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:33:27.999279       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:33:28.099198       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071182     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blc4g\" (UniqueName: \"kubernetes.io/projected/8f4db91a-5d93-4d27-aa39-e7ca5b3d6150-kube-api-access-blc4g\") pod \"dashboard-metrics-scraper-6ffb444bf9-4lbbg\" (UID: \"8f4db91a-5d93-4d27-aa39-e7ca5b3d6150\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071244     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmrlz\" (UniqueName: \"kubernetes.io/projected/47523a42-5d46-4c4f-be74-564902ca582a-kube-api-access-gmrlz\") pod \"kubernetes-dashboard-855c9754f9-z7zjk\" (UID: \"47523a42-5d46-4c4f-be74-564902ca582a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z7zjk"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071269     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f4db91a-5d93-4d27-aa39-e7ca5b3d6150-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4lbbg\" (UID: \"8f4db91a-5d93-4d27-aa39-e7ca5b3d6150\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071292     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47523a42-5d46-4c4f-be74-564902ca582a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-z7zjk\" (UID: \"47523a42-5d46-4c4f-be74-564902ca582a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z7zjk"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.715993     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 08:33:36 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:36.540356     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z7zjk" podStartSLOduration=1.613429096 podStartE2EDuration="5.540329639s" podCreationTimestamp="2025-12-17 08:33:31 +0000 UTC" firstStartedPulling="2025-12-17 08:33:32.322810233 +0000 UTC m=+6.960302818" lastFinishedPulling="2025-12-17 08:33:36.249710764 +0000 UTC m=+10.887203361" observedRunningTime="2025-12-17 08:33:36.54009462 +0000 UTC m=+11.177587223" watchObservedRunningTime="2025-12-17 08:33:36.540329639 +0000 UTC m=+11.177822242"
	Dec 17 08:33:39 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:39.535267     724 scope.go:117] "RemoveContainer" containerID="7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa"
	Dec 17 08:33:40 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:40.539603     724 scope.go:117] "RemoveContainer" containerID="7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa"
	Dec 17 08:33:40 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:40.539792     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:33:40 default-k8s-diff-port-225657 kubelet[724]: E1217 08:33:40.540017     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:33:41 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:41.545109     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:33:41 default-k8s-diff-port-225657 kubelet[724]: E1217 08:33:41.545274     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:33:49 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:49.005742     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:33:49 default-k8s-diff-port-225657 kubelet[724]: E1217 08:33:49.005935     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:33:59 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:59.594313     724 scope.go:117] "RemoveContainer" containerID="c5279ea2061e70469a71a04849896dd9d87d3b0990fe82b4954dc5ae121dea7f"
	Dec 17 08:34:02 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:02.458121     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:34:03 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:03.616753     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:34:03 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:03.617035     724 scope.go:117] "RemoveContainer" containerID="86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e"
	Dec 17 08:34:03 default-k8s-diff-port-225657 kubelet[724]: E1217 08:34:03.617264     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:34:09 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:09.005865     724 scope.go:117] "RemoveContainer" containerID="86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e"
	Dec 17 08:34:09 default-k8s-diff-port-225657 kubelet[724]: E1217 08:34:09.006105     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: kubelet.service: Consumed 1.790s CPU time.
	
	
	==> kubernetes-dashboard [4796a7d1b1637c1d9cdc7593b292f5b5928271259cd0bc92f255649f7bdc4917] <==
	2025/12/17 08:33:36 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:36 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:36 Using secret token for csrf signing
	2025/12/17 08:33:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:36 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 08:33:36 Generating JWE encryption key
	2025/12/17 08:33:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:36 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:36 Creating in-cluster Sidecar client
	2025/12/17 08:33:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:36 Serving insecurely on HTTP port: 9090
	2025/12/17 08:34:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:36 Starting overwatch
	
	
	==> storage-provisioner [11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb] <==
	I1217 08:33:59.652760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:59.661641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:59.661769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:33:59.664554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:03.119765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:07.380368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:10.978714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:14.032186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:17.054641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:17.059964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:17.060128       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:34:17.060270       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225657_d03ab360-2208-44d9-9143-8d02c53ca3e5!
	I1217 08:34:17.060252       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0142102-aca8-44fd-b78e-ed774b3ecaf8", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-225657_d03ab360-2208-44d9-9143-8d02c53ca3e5 became leader
	W1217 08:34:17.063089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:17.066500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:17.160527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225657_d03ab360-2208-44d9-9143-8d02c53ca3e5!
	W1217 08:34:19.070191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:19.075217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c5279ea2061e70469a71a04849896dd9d87d3b0990fe82b4954dc5ae121dea7f] <==
	I1217 08:33:28.837568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:58.843922       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
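The kubelet entries in the dump above show dashboard-metrics-scraper repeatedly restarted and backed off with CrashLoopBackOff. A minimal sketch of how the failing container could be inspected further, using only standard kubectl subcommands; the pod name and namespace are taken from the kubelet log above, and the context name from this profile, shown for illustration only:

	# Illustrative only: inspect the crash-looping pod reported by kubelet above.
	# Pod/namespace names come from the kubelet log; adjust the context if needed.
	kubectl --context default-k8s-diff-port-225657 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-4lbbg --previous
	kubectl --context default-k8s-diff-port-225657 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-6ffb444bf9-4lbbg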
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657: exit status 2 (341.313346ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
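The status helper above prints a single field through a Go template. The same --format flag can report several fields in one call; a minimal sketch, assuming the field names used elsewhere in this report ({{.Host}}, {{.APIServer}}) plus a {{.Kubelet}} field, for illustration only:

	# Illustrative only: print host, kubelet and apiserver state in one call,
	# using the Go-template field names the status helpers above rely on.
	out/minikube-linux-amd64 status \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}' \
	  -p default-k8s-diff-port-225657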
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-225657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-225657
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-225657:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57",
	        "Created": "2025-12-17T08:32:08.014706364Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 893892,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:33:17.05070392Z",
	            "FinishedAt": "2025-12-17T08:33:15.263109152Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/hostname",
	        "HostsPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/hosts",
	        "LogPath": "/var/lib/docker/containers/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57/79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57-json.log",
	        "Name": "/default-k8s-diff-port-225657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-225657:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-225657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "79798ebda1845afa5bf993c57ebb2b7ef725222f8e3ac3078304d061141aff57",
	                "LowerDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/896d618ac3020eef0a21d736900b211e3e3ca7018598a76a962fbc21b86d551d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-225657",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-225657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-225657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-225657",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-225657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "483a022e4ead22f9e9312a287d4dfcd8ea3fa815793aabf0084af12bc16d06a2",
	            "SandboxKey": "/var/run/docker/netns/483a022e4ead",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-225657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "370bb36dbd55007644b4cd15d494d08d5f62e1e604dbe8d80d1e7f9877cb1b79",
	                    "EndpointID": "d8b2bf66756980a2b687d5dc0ae1f82459da413e076c2726b661256598bf0cd9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7a:95:2f:e7:3a:ce",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-225657",
	                        "79798ebda184"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
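The inspect output above contains both the dynamically mapped host ports and the container's address on the per-profile Docker network. A short sketch of pulling just those two fields with docker inspect's --format Go templates; the port key "8444/tcp" and the network name are taken from the output above, shown for illustration only:

	# Illustrative only: extract the host port mapped to 8444/tcp and the
	# container IP on the profile network, both visible in the inspect output above.
	docker inspect default-k8s-diff-port-225657 \
	  --format '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}'
	docker inspect default-k8s-diff-port-225657 \
	  --format '{{ (index .NetworkSettings.Networks "default-k8s-diff-port-225657").IPAddress }}'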
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657: exit status 2 (331.03627ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225657 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-225657 logs -n 25: (1.182311781s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ image   │ no-preload-936988 image list --format=json                                                                                                                                                                                                         │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p no-preload-936988 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ default-k8s-diff-port-225657 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ pause   │ -p default-k8s-diff-port-225657 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:33:58.209245  901115 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:58.209665  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.209676  901115 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:58.209684  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.210014  901115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:58.210825  901115 out.go:368] Setting JSON to false
	I1217 08:33:58.212717  901115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8183,"bootTime":1765952255,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:58.212822  901115 start.go:143] virtualization: kvm guest
	I1217 08:33:58.215300  901115 out.go:179] * [newest-cni-441323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:58.216709  901115 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:58.216785  901115 notify.go:221] Checking for updates...
	I1217 08:33:58.219668  901115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:58.221003  901115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:58.222299  901115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:58.223911  901115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:58.225413  901115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:58.227137  901115 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:58.227234  901115 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:33:58.227330  901115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:58.254422  901115 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:58.254541  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.317710  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.306968846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.317827  901115 docker.go:319] overlay module found
	I1217 08:33:58.319793  901115 out.go:179] * Using the docker driver based on user configuration
	I1217 08:33:58.321113  901115 start.go:309] selected driver: docker
	I1217 08:33:58.321131  901115 start.go:927] validating driver "docker" against <nil>
	I1217 08:33:58.321147  901115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:58.321843  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.380013  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.36989995 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.380231  901115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 08:33:58.380277  901115 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 08:33:58.380622  901115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:33:58.382961  901115 out.go:179] * Using Docker driver with root privileges
	I1217 08:33:58.384433  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:33:58.384522  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:58.384562  901115 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:33:58.384682  901115 start.go:353] cluster config:
	{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:58.386396  901115 out.go:179] * Starting "newest-cni-441323" primary control-plane node in "newest-cni-441323" cluster
	I1217 08:33:58.388055  901115 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:58.389524  901115 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:58.390839  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:58.390896  901115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:58.390920  901115 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:58.390939  901115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:58.391040  901115 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:58.391064  901115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 08:33:58.391182  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:33:58.391208  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json: {Name:mkb212e9ad1aef1a5c9052a3b02de8f24d20051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:58.412428  901115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:58.412455  901115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:58.412471  901115 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:58.412508  901115 start.go:360] acquireMachinesLock for newest-cni-441323: {Name:mk9498dbb1eb77dbf697c7e17cff718c09574836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:58.412671  901115 start.go:364] duration metric: took 136.094µs to acquireMachinesLock for "newest-cni-441323"
	I1217 08:33:58.412704  901115 start.go:93] Provisioning new machine with config: &{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:58.412808  901115 start.go:125] createHost starting for "" (driver="docker")
	W1217 08:33:57.088758  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:59.594277  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:33:58.415034  901115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:33:58.415257  901115 start.go:159] libmachine.API.Create for "newest-cni-441323" (driver="docker")
	I1217 08:33:58.415290  901115 client.go:173] LocalClient.Create starting
	I1217 08:33:58.415373  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:33:58.415413  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415433  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415487  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:33:58.415506  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415517  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415864  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:33:58.434032  901115 cli_runner.go:211] docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:33:58.434113  901115 network_create.go:284] running [docker network inspect newest-cni-441323] to gather additional debugging logs...
	I1217 08:33:58.434133  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323
	W1217 08:33:58.451747  901115 cli_runner.go:211] docker network inspect newest-cni-441323 returned with exit code 1
	I1217 08:33:58.451800  901115 network_create.go:287] error running [docker network inspect newest-cni-441323]: docker network inspect newest-cni-441323: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-441323 not found
	I1217 08:33:58.451822  901115 network_create.go:289] output of [docker network inspect newest-cni-441323]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-441323 not found
	
	** /stderr **
	I1217 08:33:58.451966  901115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:58.471268  901115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:33:58.471897  901115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:33:58.472477  901115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:33:58.473327  901115 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fd9860}
	I1217 08:33:58.473352  901115 network_create.go:124] attempt to create docker network newest-cni-441323 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 08:33:58.473406  901115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-441323 newest-cni-441323
	I1217 08:33:58.524366  901115 network_create.go:108] docker network newest-cni-441323 192.168.76.0/24 created
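The scan above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges already own them, then creates the profile network on the first free /24. A minimal sketch of the same create step outside minikube, assuming a free 192.168.76.0/24 and a hypothetical network name demo-net:

    # list subnets already claimed by existing bridge networks
    docker network ls --filter driver=bridge -q \
      | xargs docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

    # create an isolated bridge with an explicit subnet, gateway and MTU,
    # mirroring the flags minikube passes above
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o com.docker.network.driver.mtu=1500 \
      demo-net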
	I1217 08:33:58.524402  901115 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-441323" container
	I1217 08:33:58.524477  901115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:33:58.552769  901115 cli_runner.go:164] Run: docker volume create newest-cni-441323 --label name.minikube.sigs.k8s.io=newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:33:58.576361  901115 oci.go:103] Successfully created a docker volume newest-cni-441323
	I1217 08:33:58.576482  901115 cli_runner.go:164] Run: docker run --rm --name newest-cni-441323-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --entrypoint /usr/bin/test -v newest-cni-441323:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:33:59.010485  901115 oci.go:107] Successfully prepared a docker volume newest-cni-441323
	I1217 08:33:59.010657  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:59.010683  901115 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:33:59.010786  901115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 08:34:03.061472  901115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.050619687s)
	I1217 08:34:03.061515  901115 kic.go:203] duration metric: took 4.05082754s to extract preloaded images to volume ...
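The preload step mounts the lz4 tarball read-only and untars it into the named volume from a throwaway container. A minimal sketch of the same pattern, with placeholder names my-volume and images.tar.lz4; the tag-only image reference is an assumption (the log pins it by digest), and whatever image is used must provide /usr/bin/tar plus the lz4 binary, as the kicbase image above does:

    # create the target volume, then unpack the host-side tarball into it
    IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141
    docker volume create my-volume
    docker run --rm \
      -v "$PWD/images.tar.lz4:/preloaded.tar:ro" \
      -v my-volume:/extractDir \
      --entrypoint /usr/bin/tar \
      "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir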
	W1217 08:34:03.061647  901115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 08:34:03.061705  901115 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 08:34:03.061761  901115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 08:34:03.129399  901115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-441323 --name newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-441323 --network newest-cni-441323 --ip 192.168.76.2 --volume newest-cni-441323:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	W1217 08:34:02.089192  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:34:03.092339  893657 pod_ready.go:94] pod "coredns-66bc5c9577-4n72s" is "Ready"
	I1217 08:34:03.092383  893657 pod_ready.go:86] duration metric: took 33.509125537s for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.095551  893657 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.100554  893657 pod_ready.go:94] pod "etcd-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.100581  893657 pod_ready.go:86] duration metric: took 5.003785ms for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.103653  893657 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.108621  893657 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.108648  893657 pod_ready.go:86] duration metric: took 4.968185ms for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.111008  893657 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.288962  893657 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.288999  893657 pod_ready.go:86] duration metric: took 177.964518ms for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.488686  893657 pod_ready.go:83] waiting for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.888366  893657 pod_ready.go:94] pod "kube-proxy-7lhc6" is "Ready"
	I1217 08:34:03.888395  893657 pod_ready.go:86] duration metric: took 399.676499ms for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.088489  893657 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488909  893657 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:04.488938  893657 pod_ready.go:86] duration metric: took 400.421537ms for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488950  893657 pod_ready.go:40] duration metric: took 34.90949592s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:34:04.541502  893657 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:34:04.543259  893657 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-225657" cluster and "default" namespace by default
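The readiness loop above polls each labelled kube-system pod until it reports Ready. The same check can be reproduced with plain kubectl once the kubeconfig points at the cluster; the label list mirrors the one in the log:

    # wait for the core control-plane pods to become Ready (120s timeout per selector)
    for selector in k8s-app=kube-dns component=etcd component=kube-apiserver \
                    component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$selector" --timeout=120s
    done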
	I1217 08:34:03.439306  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Running}}
	I1217 08:34:03.462526  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.485136  901115 cli_runner.go:164] Run: docker exec newest-cni-441323 stat /var/lib/dpkg/alternatives/iptables
	I1217 08:34:03.537250  901115 oci.go:144] the created container "newest-cni-441323" has a running status.
	I1217 08:34:03.537321  901115 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519...
	I1217 08:34:03.538963  901115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 08:34:03.571815  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.595363  901115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 08:34:03.595392  901115 kic_runner.go:114] Args: [docker exec --privileged newest-cni-441323 chown docker:docker /home/docker/.ssh/authorized_keys]
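SSH provisioning above generates an ed25519 keypair on the host, copies the public half into the container as authorized_keys, and fixes its ownership for the in-container docker user. A minimal sketch with placeholder names (mykey, mycontainer):

    # generate a passphrase-less ed25519 keypair on the host
    ssh-keygen -t ed25519 -N "" -f ./mykey

    # install the public key for the in-container docker user and fix ownership
    docker exec -i mycontainer sh -c 'mkdir -p /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys' < ./mykey.pub
    docker exec --privileged mycontainer chown docker:docker /home/docker/.ssh/authorized_keys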
	I1217 08:34:03.657761  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.682624  901115 machine.go:94] provisionDockerMachine start ...
	I1217 08:34:03.682736  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:03.708767  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:03.708929  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:03.708948  901115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:34:03.709844  901115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47126->127.0.0.1:33535: read: connection reset by peer
	I1217 08:34:06.841902  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:06.841936  901115 ubuntu.go:182] provisioning hostname "newest-cni-441323"
	I1217 08:34:06.842009  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:06.862406  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:06.862514  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:06.862526  901115 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-441323 && echo "newest-cni-441323" | sudo tee /etc/hostname
	I1217 08:34:07.014752  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:07.014828  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.035357  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:07.035481  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:07.035497  901115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-441323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-441323/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-441323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:34:07.164769  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:34:07.164806  901115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:34:07.164834  901115 ubuntu.go:190] setting up certificates
	I1217 08:34:07.164847  901115 provision.go:84] configureAuth start
	I1217 08:34:07.164913  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:07.184359  901115 provision.go:143] copyHostCerts
	I1217 08:34:07.184423  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:34:07.184440  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:34:07.184527  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:34:07.184680  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:34:07.184696  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:34:07.184745  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:34:07.184876  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:34:07.184890  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:34:07.184929  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:34:07.185268  901115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.newest-cni-441323 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-441323]
	I1217 08:34:07.230217  901115 provision.go:177] copyRemoteCerts
	I1217 08:34:07.230282  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:34:07.230338  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.249608  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:07.343699  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:34:07.365737  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:34:07.384742  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 08:34:07.404097  901115 provision.go:87] duration metric: took 239.212596ms to configureAuth
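configureAuth signs a per-machine server certificate with the local CA and embeds the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-441323). A minimal openssl sketch of issuing a SAN-bearing server cert from an existing CA; the file names here are placeholders:

    # server key and CSR
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.newest-cni-441323" -out server.csr

    # sign with the CA, attaching the IP and DNS SANs via an extension file
    printf 'subjectAltName = IP:127.0.0.1, IP:192.168.76.2, DNS:localhost, DNS:minikube, DNS:newest-cni-441323\n' > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -extfile san.cnf -out server.pem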
	I1217 08:34:07.404134  901115 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:34:07.404298  901115 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:07.404440  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.423488  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:07.423607  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:07.423625  901115 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:34:07.712430  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:34:07.712464  901115 machine.go:97] duration metric: took 4.029805387s to provisionDockerMachine
	I1217 08:34:07.712479  901115 client.go:176] duration metric: took 9.297180349s to LocalClient.Create
	I1217 08:34:07.712510  901115 start.go:167] duration metric: took 9.297251527s to libmachine.API.Create "newest-cni-441323"
	I1217 08:34:07.712519  901115 start.go:293] postStartSetup for "newest-cni-441323" (driver="docker")
	I1217 08:34:07.712552  901115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:34:07.712662  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:34:07.712724  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.733919  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:07.831558  901115 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:34:07.835924  901115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:34:07.835957  901115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:34:07.835974  901115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:34:07.836053  901115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:34:07.836152  901115 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:34:07.836307  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:34:07.844933  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:07.867651  901115 start.go:296] duration metric: took 155.114889ms for postStartSetup
	I1217 08:34:07.867997  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:07.887978  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:34:07.888347  901115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:34:07.888420  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.915159  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.012705  901115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:34:08.018697  901115 start.go:128] duration metric: took 9.605868571s to createHost
	I1217 08:34:08.018729  901115 start.go:83] releasing machines lock for "newest-cni-441323", held for 9.60604277s
	I1217 08:34:08.018827  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:08.039980  901115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:34:08.040009  901115 ssh_runner.go:195] Run: cat /version.json
	I1217 08:34:08.040065  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:08.040090  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:08.063218  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.063893  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.213761  901115 ssh_runner.go:195] Run: systemctl --version
	I1217 08:34:08.220850  901115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:34:08.261606  901115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:34:08.266860  901115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:34:08.266921  901115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:34:08.300150  901115 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 08:34:08.300181  901115 start.go:496] detecting cgroup driver to use...
	I1217 08:34:08.300226  901115 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:34:08.300291  901115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:34:08.333261  901115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:34:08.355969  901115 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:34:08.356062  901115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:34:08.377179  901115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:34:08.397954  901115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:34:08.500475  901115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:34:08.604668  901115 docker.go:234] disabling docker service ...
	I1217 08:34:08.604733  901115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:34:08.625158  901115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:34:08.639634  901115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:34:08.737153  901115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:34:08.827805  901115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
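The kicbase image ships both dockerd and cri-dockerd, so they are stopped, disabled and masked before CRI-O takes over as the only CRI. The same sequence condensed, run inside the node container:

    # hand the CRI socket over to CRI-O by silencing docker and cri-dockerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"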
	I1217 08:34:08.842243  901115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:34:08.858330  901115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:34:08.858433  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.876760  901115 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:34:08.876835  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.887944  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.898553  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.910181  901115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:34:08.920514  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.930523  901115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.947013  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.958410  901115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:34:08.968015  901115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:34:08.977902  901115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:09.073664  901115 ssh_runner.go:195] Run: sudo systemctl restart crio
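The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup and the unprivileged-port sysctl, after which the units are reloaded and CRI-O restarted. A condensed sketch of the two core edits, assuming the same drop-in path:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # point CRI-O at the expected pause image and the systemd cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    # pick up the new configuration
    sudo systemctl daemon-reload && sudo systemctl restart crio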
	I1217 08:34:09.227062  901115 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:34:09.227140  901115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:34:09.231935  901115 start.go:564] Will wait 60s for crictl version
	I1217 08:34:09.232006  901115 ssh_runner.go:195] Run: which crictl
	I1217 08:34:09.236638  901115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:34:09.264790  901115 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:34:09.264868  901115 ssh_runner.go:195] Run: crio --version
	I1217 08:34:09.296387  901115 ssh_runner.go:195] Run: crio --version
	I1217 08:34:09.330094  901115 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 08:34:09.331636  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:34:09.351160  901115 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 08:34:09.355602  901115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:09.368495  901115 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 08:34:09.369744  901115 kubeadm.go:884] updating cluster {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:34:09.369922  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:34:09.369998  901115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:09.405950  901115 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:09.405968  901115 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:34:09.406008  901115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:09.434170  901115 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:09.434197  901115 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:34:09.434206  901115 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 08:34:09.434311  901115 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-441323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:34:09.434396  901115 ssh_runner.go:195] Run: crio config
	I1217 08:34:09.487986  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:34:09.488011  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:09.488036  901115 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 08:34:09.488070  901115 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-441323 NodeName:newest-cni-441323 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:34:09.488225  901115 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-441323"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:34:09.488304  901115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 08:34:09.497673  901115 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:34:09.497760  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:34:09.506228  901115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:34:09.519933  901115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:34:09.537334  901115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
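With the generated config now written to the node as /var/tmp/minikube/kubeadm.yaml.new, it can be exercised before the real init further below. A minimal sketch using stock kubeadm, assuming the same file path:

    # preview what kubeadm would do with this config, without modifying the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run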
	I1217 08:34:09.551357  901115 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:34:09.555461  901115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:09.566565  901115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:09.651316  901115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:09.675288  901115 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323 for IP: 192.168.76.2
	I1217 08:34:09.675316  901115 certs.go:195] generating shared ca certs ...
	I1217 08:34:09.675339  901115 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.675523  901115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:34:09.675593  901115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:34:09.675608  901115 certs.go:257] generating profile certs ...
	I1217 08:34:09.675704  901115 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key
	I1217 08:34:09.675734  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt with IP's: []
	I1217 08:34:09.828607  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt ...
	I1217 08:34:09.828649  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt: {Name:mk6803cbfa45e76f605eeea681545b33ef9b25d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.828868  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key ...
	I1217 08:34:09.828885  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key: {Name:mk5c01682164f25e871b88d7963d1144482cb1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.828998  901115 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41
	I1217 08:34:09.829019  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 08:34:09.878711  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 ...
	I1217 08:34:09.878745  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41: {Name:mkaef6099c72e1b0c65ea50d007b532d3d965141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.878936  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41 ...
	I1217 08:34:09.878967  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41: {Name:mk0cf2d8641e25fd59679fff6f313c110d8f0f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.879083  901115 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt
	I1217 08:34:09.879203  901115 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key
	I1217 08:34:09.879299  901115 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key
	I1217 08:34:09.879323  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt with IP's: []
	I1217 08:34:09.944132  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt ...
	I1217 08:34:09.944163  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt: {Name:mk0b1987acde9defdb8091756bfaa36ff5338b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.944336  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key ...
	I1217 08:34:09.944349  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key: {Name:mkfc2f20370f9d628ea83706f59262dab55769e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.944585  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:34:09.944629  901115 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:34:09.944641  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:34:09.944672  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:34:09.944697  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:34:09.944723  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:34:09.944764  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:09.945320  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:34:09.964167  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:34:09.982444  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:34:10.000978  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:34:10.020856  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:34:10.040195  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 08:34:10.059883  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:34:10.079301  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:34:10.098390  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:34:10.119913  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:34:10.139098  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:34:10.157111  901115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:34:10.170095  901115 ssh_runner.go:195] Run: openssl version
	I1217 08:34:10.176426  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.184186  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:34:10.191942  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.195711  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.195773  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.231702  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:10.239607  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:10.247261  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.255073  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:34:10.262983  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.267054  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.267109  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.302271  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:34:10.310816  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:34:10.319340  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.327110  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:34:10.335023  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.339265  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.339340  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.374610  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:34:10.382782  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
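The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is how clients locate a CA in /etc/ssl/certs. Deriving the hash and creating the link for one certificate looks like this, using a cert path from the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    # expose the cert under its subject-hash name so OpenSSL-based clients find it
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"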
	I1217 08:34:10.390828  901115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:34:10.394784  901115 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:34:10.394852  901115 kubeadm.go:401] StartCluster: {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:10.394943  901115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:34:10.395019  901115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:34:10.424810  901115 cri.go:89] found id: ""
	I1217 08:34:10.424888  901115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:34:10.433076  901115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:34:10.441063  901115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:34:10.441110  901115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:34:10.448884  901115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:34:10.448906  901115 kubeadm.go:158] found existing configuration files:
	
	I1217 08:34:10.448957  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 08:34:10.456858  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:34:10.456924  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:34:10.465105  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 08:34:10.473594  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:34:10.473659  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:34:10.482079  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 08:34:10.491034  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:34:10.491102  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:34:10.499492  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 08:34:10.507794  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:34:10.507859  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:34:10.515670  901115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:34:10.557013  901115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 08:34:10.557109  901115 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:34:10.629084  901115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:34:10.629199  901115 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:34:10.629303  901115 kubeadm.go:319] OS: Linux
	I1217 08:34:10.629383  901115 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:34:10.629463  901115 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:34:10.629543  901115 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:34:10.629656  901115 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:34:10.629746  901115 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:34:10.629828  901115 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:34:10.629897  901115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:34:10.629949  901115 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:34:10.691032  901115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:34:10.691132  901115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:34:10.691265  901115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:34:10.699576  901115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:34:10.706854  901115 out.go:252]   - Generating certificates and keys ...
	I1217 08:34:10.706950  901115 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:34:10.707017  901115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:34:10.745346  901115 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:34:10.782675  901115 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:34:10.808586  901115 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:34:10.866200  901115 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:34:10.937802  901115 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:34:10.938015  901115 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-441323] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 08:34:11.066458  901115 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:34:11.066678  901115 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-441323] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 08:34:11.167863  901115 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:34:11.242887  901115 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:34:11.388467  901115 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:34:11.388602  901115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:34:11.472085  901115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:34:11.621733  901115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:34:11.650118  901115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:34:11.858013  901115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:34:12.025062  901115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:34:12.025578  901115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:34:12.031983  901115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:34:12.034787  901115 out.go:252]   - Booting up control plane ...
	I1217 08:34:12.034921  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:34:12.035032  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:34:12.035109  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:34:12.053701  901115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:34:12.053874  901115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:34:12.061199  901115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:34:12.061396  901115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:34:12.061467  901115 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:34:12.160091  901115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:34:12.160237  901115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:34:12.661926  901115 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8228ms
	I1217 08:34:12.666583  901115 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:34:12.666727  901115 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 08:34:12.666872  901115 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:34:12.666948  901115 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:34:13.672038  901115 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005468722s
	I1217 08:34:14.481559  901115 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.814984426s
	I1217 08:34:16.168904  901115 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502284655s
	I1217 08:34:16.186450  901115 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:34:16.198199  901115 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:34:16.209895  901115 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:34:16.210178  901115 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-441323 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:34:16.220268  901115 kubeadm.go:319] [bootstrap-token] Using token: 9ej1lh.70xuxr5pnsrao1sw
	I1217 08:34:16.221916  901115 out.go:252]   - Configuring RBAC rules ...
	I1217 08:34:16.222074  901115 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:34:16.227780  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:34:16.235400  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:34:16.238948  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:34:16.243237  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:34:16.246753  901115 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:34:16.576527  901115 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:34:16.994999  901115 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 08:34:17.577565  901115 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:34:17.578524  901115 kubeadm.go:319] 
	I1217 08:34:17.578648  901115 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:34:17.578658  901115 kubeadm.go:319] 
	I1217 08:34:17.578769  901115 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:34:17.578784  901115 kubeadm.go:319] 
	I1217 08:34:17.578806  901115 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:34:17.578878  901115 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:34:17.578965  901115 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:34:17.578981  901115 kubeadm.go:319] 
	I1217 08:34:17.579049  901115 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:34:17.579065  901115 kubeadm.go:319] 
	I1217 08:34:17.579128  901115 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:34:17.579136  901115 kubeadm.go:319] 
	I1217 08:34:17.579221  901115 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:34:17.579317  901115 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:34:17.579402  901115 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:34:17.579412  901115 kubeadm.go:319] 
	I1217 08:34:17.579522  901115 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:34:17.579646  901115 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:34:17.579657  901115 kubeadm.go:319] 
	I1217 08:34:17.579757  901115 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9ej1lh.70xuxr5pnsrao1sw \
	I1217 08:34:17.579888  901115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:34:17.579922  901115 kubeadm.go:319] 	--control-plane 
	I1217 08:34:17.579931  901115 kubeadm.go:319] 
	I1217 08:34:17.580054  901115 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:34:17.580062  901115 kubeadm.go:319] 
	I1217 08:34:17.580155  901115 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9ej1lh.70xuxr5pnsrao1sw \
	I1217 08:34:17.580304  901115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:34:17.582979  901115 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:34:17.583110  901115 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:34:17.583140  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:34:17.583153  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:17.585317  901115 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 08:34:17.586963  901115 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:34:17.591670  901115 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 08:34:17.591691  901115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:34:17.606505  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:34:17.830969  901115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:34:17.831056  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:17.831105  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-441323 minikube.k8s.io/updated_at=2025_12_17T08_34_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=newest-cni-441323 minikube.k8s.io/primary=true
	I1217 08:34:17.930527  901115 ops.go:34] apiserver oom_adj: -16
	I1217 08:34:17.930575  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 17 08:33:39 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:39.578978895Z" level=info msg="Started container" PID=1762 containerID=7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper id=fc8d2ab8-1578-43b2-b01d-c2a9a9a197b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2cb20e7f4f33377f8f8fde2ed388e97c9e0de52fa62ff05fc91239288b38836
	Dec 17 08:33:40 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:40.541302356Z" level=info msg="Removing container: 7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa" id=9d502127-012e-49fb-b0b2-b043177a33a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:40 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:40.551655397Z" level=info msg="Removed container 7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=9d502127-012e-49fb-b0b2-b043177a33a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.594810657Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c102cada-64bc-4c99-b526-51b7732c55ad name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.596165798Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7405fb3a-2b9f-428c-a9c9-f6e9b4b52a36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.59727963Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bde5559d-67cc-43d7-9805-8f86ff7aa464 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.597415556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.60338108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.603604236Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ae611f457d30c8c036a6665403fad881a09ccea7946a54c36d034fe137e27f01/merged/etc/passwd: no such file or directory"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.603729971Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ae611f457d30c8c036a6665403fad881a09ccea7946a54c36d034fe137e27f01/merged/etc/group: no such file or directory"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.604476566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.633430886Z" level=info msg="Created container 11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb: kube-system/storage-provisioner/storage-provisioner" id=bde5559d-67cc-43d7-9805-8f86ff7aa464 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.634296124Z" level=info msg="Starting container: 11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb" id=35ca1bcd-ee8a-477d-ab58-fd9c2afbe1d1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:33:59 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:33:59.636823055Z" level=info msg="Started container" PID=1778 containerID=11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb description=kube-system/storage-provisioner/storage-provisioner id=35ca1bcd-ee8a-477d-ab58-fd9c2afbe1d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c71ce80ebbbd7d1ee8837812b675b72026bbd38d40ac571f2d45e4f7a524a4ed
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.458749753Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e00b80a4-d8b7-4115-bb4d-7e27185bb81e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.525450902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=84066087-485d-4d6b-9490-3343b77f3b11 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.527777254Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=97f08e50-2efb-44f1-8729-175b46d14839 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.527895423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.68755638Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.688102726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.844724002Z" level=info msg="Created container 86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=97f08e50-2efb-44f1-8729-175b46d14839 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.845575998Z" level=info msg="Starting container: 86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e" id=e0e766a2-cd33-43c5-985e-0db33b8cf5be name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:34:02 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:02.848473725Z" level=info msg="Started container" PID=1794 containerID=86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper id=e0e766a2-cd33-43c5-985e-0db33b8cf5be name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2cb20e7f4f33377f8f8fde2ed388e97c9e0de52fa62ff05fc91239288b38836
	Dec 17 08:34:03 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:03.618640725Z" level=info msg="Removing container: 7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43" id=3508457d-3774-4531-9ec3-565ca50189e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 08:34:03 default-k8s-diff-port-225657 crio[560]: time="2025-12-17T08:34:03.633040444Z" level=info msg="Removed container 7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg/dashboard-metrics-scraper" id=3508457d-3774-4531-9ec3-565ca50189e5 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	86f065c8ae64e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   e2cb20e7f4f33       dashboard-metrics-scraper-6ffb444bf9-4lbbg             kubernetes-dashboard
	11a6660bcfd34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   c71ce80ebbbd7       storage-provisioner                                    kube-system
	4796a7d1b1637       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   47927a8a3b870       kubernetes-dashboard-855c9754f9-z7zjk                  kubernetes-dashboard
	a9e1fe90bfa59       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   1c8983632e520       coredns-66bc5c9577-4n72s                               kube-system
	7745cdd1cec19       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   20ece4cfbc2a0       busybox                                                default
	0cd62d5b17f27       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           52 seconds ago      Running             kube-proxy                  0                   ef0359cb2e9b5       kube-proxy-7lhc6                                       kube-system
	c5279ea2061e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   c71ce80ebbbd7       storage-provisioner                                    kube-system
	17ee730c53c47       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   01870cb69d1f3       kindnet-s5z6t                                          kube-system
	29406bff376a7       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           55 seconds ago      Running             kube-apiserver              0                   b227043ef68e5       kube-apiserver-default-k8s-diff-port-225657            kube-system
	f5429cbfa6cd1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           55 seconds ago      Running             kube-controller-manager     0                   85fae2e3f5597       kube-controller-manager-default-k8s-diff-port-225657   kube-system
	e12c965d867c6       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           55 seconds ago      Running             kube-scheduler              0                   f1eab560b5bc7       kube-scheduler-default-k8s-diff-port-225657            kube-system
	75f6d050456bf       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   4c78723d09881       etcd-default-k8s-diff-port-225657                      kube-system
	
	
	==> coredns [a9e1fe90bfa59a6c929b5eba24b26253597fb9733c58ea37aa95c7e8900e56f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44494 - 62350 "HINFO IN 5448690550659840414.6375439613802673137. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030231971s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-225657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-225657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=default-k8s-diff-port-225657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_32_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-225657
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:34:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:34:09 +0000   Wed, 17 Dec 2025 08:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-225657
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                05461791-e89b-4d46-9592-b5168df83171
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-4n72s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-225657                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-s5z6t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-225657             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-225657    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-7lhc6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-225657             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4lbbg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z7zjk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-225657 event: Registered Node default-k8s-diff-port-225657 in Controller
	  Normal  NodeReady                96s                kubelet          Node default-k8s-diff-port-225657 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-225657 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-225657 event: Registered Node default-k8s-diff-port-225657 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [75f6d050456bf249fe8e7f1b9765eb60db70c90bb28d13cd7f8cf8513dba041d] <==
	{"level":"warn","ts":"2025-12-17T08:33:27.320630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.328373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.335399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.343680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.351817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.360772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.369178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.376437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.385043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.393960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.403524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.412399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.419565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.428096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.437576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.445313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.453101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.474483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.482691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.493894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:33:27.536607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:34:02.812049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.980784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-4n72s\" limit:1 ","response":"range_response_count:1 size:5946"}
	{"level":"info","ts":"2025-12-17T08:34:02.812142Z","caller":"traceutil/trace.go:172","msg":"trace[1628781002] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-4n72s; range_end:; response_count:1; response_revision:618; }","duration":"227.083982ms","start":"2025-12-17T08:34:02.585044Z","end":"2025-12-17T08:34:02.812128Z","steps":["trace[1628781002] 'range keys from in-memory index tree'  (duration: 226.782808ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:34:02.811983Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.920608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:34:02.812246Z","caller":"traceutil/trace.go:172","msg":"trace[937819424] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:618; }","duration":"106.204137ms","start":"2025-12-17T08:34:02.706034Z","end":"2025-12-17T08:34:02.812238Z","steps":["trace[937819424] 'range keys from in-memory index tree'  (duration: 105.844798ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:34:21 up  2:16,  0 user,  load average: 4.45, 4.13, 2.92
	Linux default-k8s-diff-port-225657 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17ee730c53c472e5d0eef17a0017d6d51b3364629f79ea96528f42e992e7006b] <==
	I1217 08:33:29.091767       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:33:29.092058       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 08:33:29.092233       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:33:29.092262       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:33:29.092288       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:33:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:33:29.293620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:33:29.293758       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:33:29.293782       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:33:29.293927       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:33:29.689751       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:33:29.689823       1 metrics.go:72] Registering metrics
	I1217 08:33:29.689880       1 controller.go:711] "Syncing nftables rules"
	I1217 08:33:39.292676       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:33:39.292738       1 main.go:301] handling current node
	I1217 08:33:49.294671       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:33:49.294711       1 main.go:301] handling current node
	I1217 08:33:59.292691       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:33:59.292742       1 main.go:301] handling current node
	I1217 08:34:09.292667       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:34:09.292733       1 main.go:301] handling current node
	I1217 08:34:19.299431       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 08:34:19.299476       1 main.go:301] handling current node
	
	
	==> kube-apiserver [29406bff376a7c4d1050bad268535366dff3136cd50acef8d59f5d2cc53020a9] <==
	I1217 08:33:28.020669       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:33:28.020730       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:33:28.020786       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:33:28.020842       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 08:33:28.021388       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 08:33:28.021483       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 08:33:28.021648       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 08:33:28.027714       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 08:33:28.028025       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 08:33:28.047679       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:33:28.051249       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 08:33:28.060940       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:33:28.079119       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:33:28.305169       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:33:28.337090       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:33:28.360021       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:33:28.367289       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:33:28.375153       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:33:28.414796       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.34.45"}
	I1217 08:33:28.428093       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.119.251"}
	I1217 08:33:28.925214       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:33:31.423348       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:33:31.774550       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:33:31.922985       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f5429cbfa6cd131e89c3d06fdef6af14325ef0ea1e7bd1bdd6eb0afe6a5a0b52] <==
	I1217 08:33:31.356869       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 08:33:31.360110       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:33:31.361424       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 08:33:31.364722       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:33:31.369594       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 08:33:31.369626       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 08:33:31.369660       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 08:33:31.369704       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 08:33:31.369770       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 08:33:31.369801       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 08:33:31.369845       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 08:33:31.369859       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 08:33:31.369912       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-225657"
	I1217 08:33:31.369981       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 08:33:31.369982       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:33:31.370002       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:33:31.370009       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 08:33:31.369999       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 08:33:31.370197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:33:31.373544       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 08:33:31.381713       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:33:31.386885       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:33:31.389037       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:33:31.391412       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:33:31.404044       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [0cd62d5b17f2764df551c19f52e1054e775c8e02640e841021af7c8178a15f71] <==
	I1217 08:33:28.881150       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:33:28.964554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:33:29.064761       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:33:29.064813       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 08:33:29.064925       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:33:29.087906       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:33:29.087979       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:33:29.094287       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:33:29.095310       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:33:29.095424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:29.097118       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:33:29.097146       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:33:29.097177       1 config.go:200] "Starting service config controller"
	I1217 08:33:29.097187       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:33:29.097287       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:33:29.097295       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:33:29.097294       1 config.go:309] "Starting node config controller"
	I1217 08:33:29.097855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:33:29.097872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:33:29.197323       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:33:29.197362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:33:29.199937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e12c965d867c6ac249f33df13a2d225cba4adb0da8040c834a0dcaba573c7610] <==
	I1217 08:33:26.915442       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:33:27.956712       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:33:27.956834       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:33:27.956850       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:33:27.956860       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:33:27.984767       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:33:27.984813       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:33:27.998007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:27.998059       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:33:27.999174       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:33:27.999279       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:33:28.099198       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071182     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blc4g\" (UniqueName: \"kubernetes.io/projected/8f4db91a-5d93-4d27-aa39-e7ca5b3d6150-kube-api-access-blc4g\") pod \"dashboard-metrics-scraper-6ffb444bf9-4lbbg\" (UID: \"8f4db91a-5d93-4d27-aa39-e7ca5b3d6150\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071244     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmrlz\" (UniqueName: \"kubernetes.io/projected/47523a42-5d46-4c4f-be74-564902ca582a-kube-api-access-gmrlz\") pod \"kubernetes-dashboard-855c9754f9-z7zjk\" (UID: \"47523a42-5d46-4c4f-be74-564902ca582a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z7zjk"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071269     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f4db91a-5d93-4d27-aa39-e7ca5b3d6150-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4lbbg\" (UID: \"8f4db91a-5d93-4d27-aa39-e7ca5b3d6150\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.071292     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47523a42-5d46-4c4f-be74-564902ca582a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-z7zjk\" (UID: \"47523a42-5d46-4c4f-be74-564902ca582a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z7zjk"
	Dec 17 08:33:32 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:32.715993     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 08:33:36 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:36.540356     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z7zjk" podStartSLOduration=1.613429096 podStartE2EDuration="5.540329639s" podCreationTimestamp="2025-12-17 08:33:31 +0000 UTC" firstStartedPulling="2025-12-17 08:33:32.322810233 +0000 UTC m=+6.960302818" lastFinishedPulling="2025-12-17 08:33:36.249710764 +0000 UTC m=+10.887203361" observedRunningTime="2025-12-17 08:33:36.54009462 +0000 UTC m=+11.177587223" watchObservedRunningTime="2025-12-17 08:33:36.540329639 +0000 UTC m=+11.177822242"
	Dec 17 08:33:39 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:39.535267     724 scope.go:117] "RemoveContainer" containerID="7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa"
	Dec 17 08:33:40 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:40.539603     724 scope.go:117] "RemoveContainer" containerID="7ddcbc26d78784b47345c89d9029ad8d733bc4535a5028ad6b3f7a0c9059e0fa"
	Dec 17 08:33:40 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:40.539792     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:33:40 default-k8s-diff-port-225657 kubelet[724]: E1217 08:33:40.540017     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:33:41 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:41.545109     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:33:41 default-k8s-diff-port-225657 kubelet[724]: E1217 08:33:41.545274     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:33:49 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:49.005742     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:33:49 default-k8s-diff-port-225657 kubelet[724]: E1217 08:33:49.005935     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:33:59 default-k8s-diff-port-225657 kubelet[724]: I1217 08:33:59.594313     724 scope.go:117] "RemoveContainer" containerID="c5279ea2061e70469a71a04849896dd9d87d3b0990fe82b4954dc5ae121dea7f"
	Dec 17 08:34:02 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:02.458121     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:34:03 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:03.616753     724 scope.go:117] "RemoveContainer" containerID="7554f225acbede2b3589ea754ac99df2a80b48bdd92f541ae1e3e083a11efb43"
	Dec 17 08:34:03 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:03.617035     724 scope.go:117] "RemoveContainer" containerID="86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e"
	Dec 17 08:34:03 default-k8s-diff-port-225657 kubelet[724]: E1217 08:34:03.617264     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:34:09 default-k8s-diff-port-225657 kubelet[724]: I1217 08:34:09.005865     724 scope.go:117] "RemoveContainer" containerID="86f065c8ae64e32294557a903b10637b5ba7876d19a1cb4165eb0d264497f57e"
	Dec 17 08:34:09 default-k8s-diff-port-225657 kubelet[724]: E1217 08:34:09.006105     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4lbbg_kubernetes-dashboard(8f4db91a-5d93-4d27-aa39-e7ca5b3d6150)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4lbbg" podUID="8f4db91a-5d93-4d27-aa39-e7ca5b3d6150"
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 08:34:16 default-k8s-diff-port-225657 systemd[1]: kubelet.service: Consumed 1.790s CPU time.
	
	
	==> kubernetes-dashboard [4796a7d1b1637c1d9cdc7593b292f5b5928271259cd0bc92f255649f7bdc4917] <==
	2025/12/17 08:33:36 Using namespace: kubernetes-dashboard
	2025/12/17 08:33:36 Using in-cluster config to connect to apiserver
	2025/12/17 08:33:36 Using secret token for csrf signing
	2025/12/17 08:33:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 08:33:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 08:33:36 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 08:33:36 Generating JWE encryption key
	2025/12/17 08:33:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 08:33:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 08:33:36 Initializing JWE encryption key from synchronized object
	2025/12/17 08:33:36 Creating in-cluster Sidecar client
	2025/12/17 08:33:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:36 Serving insecurely on HTTP port: 9090
	2025/12/17 08:34:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 08:33:36 Starting overwatch
	
	
	==> storage-provisioner [11a6660bcfd34c47c1859144db282c159ba15bb3da64062726ab1ab69b6eb9fb] <==
	I1217 08:33:59.652760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:33:59.661641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:33:59.661769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:33:59.664554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:03.119765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:07.380368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:10.978714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:14.032186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:17.054641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:17.059964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:17.060128       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:34:17.060270       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225657_d03ab360-2208-44d9-9143-8d02c53ca3e5!
	I1217 08:34:17.060252       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0142102-aca8-44fd-b78e-ed774b3ecaf8", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-225657_d03ab360-2208-44d9-9143-8d02c53ca3e5 became leader
	W1217 08:34:17.063089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:17.066500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:34:17.160527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225657_d03ab360-2208-44d9-9143-8d02c53ca3e5!
	W1217 08:34:19.070191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:19.075217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:21.078687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:34:21.083451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c5279ea2061e70469a71a04849896dd9d87d3b0990fe82b4954dc5ae121dea7f] <==
	I1217 08:33:28.837568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 08:33:58.843922       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657: exit status 2 (375.536077ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-225657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.88s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-441323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-441323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.668059ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-441323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-441323
helpers_test.go:244: (dbg) docker inspect newest-cni-441323:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4",
	        "Created": "2025-12-17T08:34:03.147489501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 902460,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:34:03.191262327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/hosts",
	        "LogPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4-json.log",
	        "Name": "/newest-cni-441323",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-441323:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-441323",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4",
	                "LowerDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-441323",
	                "Source": "/var/lib/docker/volumes/newest-cni-441323/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-441323",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-441323",
	                "name.minikube.sigs.k8s.io": "newest-cni-441323",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "691d10fcfbc2584a829a96d65b900640ab9f0e8874c211b99164ef3c3641e5b7",
	            "SandboxKey": "/var/run/docker/netns/691d10fcfbc2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-441323": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3d2e6bc2ee52a92bd5fc401c42f5bb70038c4ef50a33f6f2359c529ab511ea2",
	                    "EndpointID": "bc639a224c5fcfc6abb98a49991bfbaab699c56d7e12e163016d35d04ba812f1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "76:eb:17:a7:96:c3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-441323",
	                        "5e7ea243e76c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-441323 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-441323 logs -n 25: (1.19249759s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:32 UTC │
	│ start   │ -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225657 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:32 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ image   │ no-preload-936988 image list --format=json                                                                                                                                                                                                         │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p no-preload-936988 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ default-k8s-diff-port-225657 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ pause   │ -p default-k8s-diff-port-225657 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-225657                                                                                                                                                                                                                    │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-441323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:33:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:33:58.209245  901115 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:33:58.209665  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.209676  901115 out.go:374] Setting ErrFile to fd 2...
	I1217 08:33:58.209684  901115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:33:58.210014  901115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:33:58.210825  901115 out.go:368] Setting JSON to false
	I1217 08:33:58.212717  901115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8183,"bootTime":1765952255,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:33:58.212822  901115 start.go:143] virtualization: kvm guest
	I1217 08:33:58.215300  901115 out.go:179] * [newest-cni-441323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:33:58.216709  901115 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:33:58.216785  901115 notify.go:221] Checking for updates...
	I1217 08:33:58.219668  901115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:33:58.221003  901115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:33:58.222299  901115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:33:58.223911  901115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:33:58.225413  901115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:33:58.227137  901115 config.go:182] Loaded profile config "default-k8s-diff-port-225657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:33:58.227234  901115 config.go:182] Loaded profile config "no-preload-936988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:33:58.227330  901115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:33:58.254422  901115 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:33:58.254541  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.317710  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.306968846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.317827  901115 docker.go:319] overlay module found
	I1217 08:33:58.319793  901115 out.go:179] * Using the docker driver based on user configuration
	I1217 08:33:58.321113  901115 start.go:309] selected driver: docker
	I1217 08:33:58.321131  901115 start.go:927] validating driver "docker" against <nil>
	I1217 08:33:58.321147  901115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:33:58.321843  901115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:33:58.380013  901115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:33:58.36989995 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:33:58.380231  901115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 08:33:58.380277  901115 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 08:33:58.380622  901115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:33:58.382961  901115 out.go:179] * Using Docker driver with root privileges
	I1217 08:33:58.384433  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:33:58.384522  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:33:58.384562  901115 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 08:33:58.384682  901115 start.go:353] cluster config:
	{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:33:58.386396  901115 out.go:179] * Starting "newest-cni-441323" primary control-plane node in "newest-cni-441323" cluster
	I1217 08:33:58.388055  901115 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:33:58.389524  901115 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:33:58.390839  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:58.390896  901115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 08:33:58.390920  901115 cache.go:65] Caching tarball of preloaded images
	I1217 08:33:58.390939  901115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:33:58.391040  901115 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:33:58.391064  901115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 08:33:58.391182  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:33:58.391208  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json: {Name:mkb212e9ad1aef1a5c9052a3b02de8f24d20051c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:33:58.412428  901115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:33:58.412455  901115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:33:58.412471  901115 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:33:58.412508  901115 start.go:360] acquireMachinesLock for newest-cni-441323: {Name:mk9498dbb1eb77dbf697c7e17cff718c09574836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:33:58.412671  901115 start.go:364] duration metric: took 136.094µs to acquireMachinesLock for "newest-cni-441323"
	I1217 08:33:58.412704  901115 start.go:93] Provisioning new machine with config: &{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:33:58.412808  901115 start.go:125] createHost starting for "" (driver="docker")
	W1217 08:33:57.088758  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	W1217 08:33:59.594277  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:33:58.415034  901115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 08:33:58.415257  901115 start.go:159] libmachine.API.Create for "newest-cni-441323" (driver="docker")
	I1217 08:33:58.415290  901115 client.go:173] LocalClient.Create starting
	I1217 08:33:58.415373  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem
	I1217 08:33:58.415413  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415433  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415487  901115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem
	I1217 08:33:58.415506  901115 main.go:143] libmachine: Decoding PEM data...
	I1217 08:33:58.415517  901115 main.go:143] libmachine: Parsing certificate...
	I1217 08:33:58.415864  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 08:33:58.434032  901115 cli_runner.go:211] docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 08:33:58.434113  901115 network_create.go:284] running [docker network inspect newest-cni-441323] to gather additional debugging logs...
	I1217 08:33:58.434133  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323
	W1217 08:33:58.451747  901115 cli_runner.go:211] docker network inspect newest-cni-441323 returned with exit code 1
	I1217 08:33:58.451800  901115 network_create.go:287] error running [docker network inspect newest-cni-441323]: docker network inspect newest-cni-441323: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-441323 not found
	I1217 08:33:58.451822  901115 network_create.go:289] output of [docker network inspect newest-cni-441323]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-441323 not found
	
	** /stderr **
	I1217 08:33:58.451966  901115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:33:58.471268  901115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
	I1217 08:33:58.471897  901115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d3a8438f2b04 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:22:9a:90:c8:31} reservation:<nil>}
	I1217 08:33:58.472477  901115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-270f10fabfc5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:f8:c6:e8:84:c2} reservation:<nil>}
	I1217 08:33:58.473327  901115 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fd9860}
	I1217 08:33:58.473352  901115 network_create.go:124] attempt to create docker network newest-cni-441323 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 08:33:58.473406  901115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-441323 newest-cni-441323
	I1217 08:33:58.524366  901115 network_create.go:108] docker network newest-cni-441323 192.168.76.0/24 created
	I1217 08:33:58.524402  901115 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-441323" container
	I1217 08:33:58.524477  901115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 08:33:58.552769  901115 cli_runner.go:164] Run: docker volume create newest-cni-441323 --label name.minikube.sigs.k8s.io=newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true
	I1217 08:33:58.576361  901115 oci.go:103] Successfully created a docker volume newest-cni-441323
	I1217 08:33:58.576482  901115 cli_runner.go:164] Run: docker run --rm --name newest-cni-441323-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --entrypoint /usr/bin/test -v newest-cni-441323:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 08:33:59.010485  901115 oci.go:107] Successfully prepared a docker volume newest-cni-441323
	I1217 08:33:59.010657  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:33:59.010683  901115 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 08:33:59.010786  901115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 08:34:03.061472  901115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-441323:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.050619687s)
	I1217 08:34:03.061515  901115 kic.go:203] duration metric: took 4.05082754s to extract preloaded images to volume ...
	W1217 08:34:03.061647  901115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 08:34:03.061705  901115 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 08:34:03.061761  901115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 08:34:03.129399  901115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-441323 --name newest-cni-441323 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-441323 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-441323 --network newest-cni-441323 --ip 192.168.76.2 --volume newest-cni-441323:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	W1217 08:34:02.089192  893657 pod_ready.go:104] pod "coredns-66bc5c9577-4n72s" is not "Ready", error: <nil>
	I1217 08:34:03.092339  893657 pod_ready.go:94] pod "coredns-66bc5c9577-4n72s" is "Ready"
	I1217 08:34:03.092383  893657 pod_ready.go:86] duration metric: took 33.509125537s for pod "coredns-66bc5c9577-4n72s" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.095551  893657 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.100554  893657 pod_ready.go:94] pod "etcd-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.100581  893657 pod_ready.go:86] duration metric: took 5.003785ms for pod "etcd-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.103653  893657 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.108621  893657 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.108648  893657 pod_ready.go:86] duration metric: took 4.968185ms for pod "kube-apiserver-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.111008  893657 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.288962  893657 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:03.288999  893657 pod_ready.go:86] duration metric: took 177.964518ms for pod "kube-controller-manager-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.488686  893657 pod_ready.go:83] waiting for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:03.888366  893657 pod_ready.go:94] pod "kube-proxy-7lhc6" is "Ready"
	I1217 08:34:03.888395  893657 pod_ready.go:86] duration metric: took 399.676499ms for pod "kube-proxy-7lhc6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.088489  893657 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488909  893657 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-225657" is "Ready"
	I1217 08:34:04.488938  893657 pod_ready.go:86] duration metric: took 400.421537ms for pod "kube-scheduler-default-k8s-diff-port-225657" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:34:04.488950  893657 pod_ready.go:40] duration metric: took 34.90949592s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:34:04.541502  893657 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:34:04.543259  893657 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-225657" cluster and "default" namespace by default
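The interleaved 893657 lines are the default-k8s-diff-port profile finishing its wait for the kube-system pods to report Ready. A readiness poll of that shape can be written against the API with client-go; the sketch below is illustrative only (it is not minikube's pod_ready.go) and assumes a kubeconfig at the default location plus the pod name taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-4n72s", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
```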
	I1217 08:34:03.439306  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Running}}
	I1217 08:34:03.462526  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.485136  901115 cli_runner.go:164] Run: docker exec newest-cni-441323 stat /var/lib/dpkg/alternatives/iptables
	I1217 08:34:03.537250  901115 oci.go:144] the created container "newest-cni-441323" has a running status.
	I1217 08:34:03.537321  901115 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519...
	I1217 08:34:03.538963  901115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 08:34:03.571815  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.595363  901115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 08:34:03.595392  901115 kic_runner.go:114] Args: [docker exec --privileged newest-cni-441323 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 08:34:03.657761  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:03.682624  901115 machine.go:94] provisionDockerMachine start ...
	I1217 08:34:03.682736  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:03.708767  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:03.708929  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:03.708948  901115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:34:03.709844  901115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47126->127.0.0.1:33535: read: connection reset by peer
	I1217 08:34:06.841902  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:06.841936  901115 ubuntu.go:182] provisioning hostname "newest-cni-441323"
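provisionDockerMachine above dials SSH on the published host port (127.0.0.1:33535) with the freshly generated ed25519 key; the first attempt is reset while sshd in the new container is still coming up, and the retry succeeds. A bare-bones equivalent with golang.org/x/crypto/ssh, not libmachine's client; the key path, user and port are copied from the log:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, no known_hosts
		Timeout:         10 * time.Second,
	}
	// Retry: sshd may not be up yet, which is what the "connection reset by peer"
	// line in the log shows for the first attempt.
	var client *ssh.Client
	for i := 0; i < 10; i++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:33535", cfg)
		if err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
```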
	I1217 08:34:06.842009  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:06.862406  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:06.862514  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:06.862526  901115 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-441323 && echo "newest-cni-441323" | sudo tee /etc/hostname
	I1217 08:34:07.014752  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:07.014828  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.035357  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:07.035481  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:07.035497  901115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-441323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-441323/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-441323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:34:07.164769  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:34:07.164806  901115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:34:07.164834  901115 ubuntu.go:190] setting up certificates
	I1217 08:34:07.164847  901115 provision.go:84] configureAuth start
	I1217 08:34:07.164913  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:07.184359  901115 provision.go:143] copyHostCerts
	I1217 08:34:07.184423  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:34:07.184440  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:34:07.184527  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:34:07.184680  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:34:07.184696  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:34:07.184745  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:34:07.184876  901115 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:34:07.184890  901115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:34:07.184929  901115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:34:07.185268  901115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.newest-cni-441323 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-441323]
	I1217 08:34:07.230217  901115 provision.go:177] copyRemoteCerts
	I1217 08:34:07.230282  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:34:07.230338  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.249608  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:07.343699  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:34:07.365737  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:34:07.384742  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 08:34:07.404097  901115 provision.go:87] duration metric: took 239.212596ms to configureAuth
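configureAuth above generates a server certificate whose subject alternative names are the san=[...] list in the provision.go line, then copies it to /etc/docker on the machine. A condensed sketch of issuing such a certificate with crypto/x509; it is self-signed purely for brevity (minikube signs with its ca.pem/ca-key.pem), and the validity period is borrowed from the CertExpiration value that appears later in the log:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-441323"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration seen in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the "generating server cert" line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-441323"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed here; a real setup would pass the CA cert and key as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```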
	I1217 08:34:07.404134  901115 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:34:07.404298  901115 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:07.404440  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.423488  901115 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:07.423607  901115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1217 08:34:07.423625  901115 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:34:07.712430  901115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:34:07.712464  901115 machine.go:97] duration metric: took 4.029805387s to provisionDockerMachine
	I1217 08:34:07.712479  901115 client.go:176] duration metric: took 9.297180349s to LocalClient.Create
	I1217 08:34:07.712510  901115 start.go:167] duration metric: took 9.297251527s to libmachine.API.Create "newest-cni-441323"
	I1217 08:34:07.712519  901115 start.go:293] postStartSetup for "newest-cni-441323" (driver="docker")
	I1217 08:34:07.712552  901115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:34:07.712662  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:34:07.712724  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.733919  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:07.831558  901115 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:34:07.835924  901115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:34:07.835957  901115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:34:07.835974  901115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:34:07.836053  901115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:34:07.836152  901115 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:34:07.836307  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:34:07.844933  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:07.867651  901115 start.go:296] duration metric: took 155.114889ms for postStartSetup
	I1217 08:34:07.867997  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:07.887978  901115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:34:07.888347  901115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:34:07.888420  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:07.915159  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.012705  901115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:34:08.018697  901115 start.go:128] duration metric: took 9.605868571s to createHost
	I1217 08:34:08.018729  901115 start.go:83] releasing machines lock for "newest-cni-441323", held for 9.60604277s
	I1217 08:34:08.018827  901115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:08.039980  901115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:34:08.040009  901115 ssh_runner.go:195] Run: cat /version.json
	I1217 08:34:08.040065  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:08.040090  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:08.063218  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.063893  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:08.213761  901115 ssh_runner.go:195] Run: systemctl --version
	I1217 08:34:08.220850  901115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:34:08.261606  901115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:34:08.266860  901115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:34:08.266921  901115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:34:08.300150  901115 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 08:34:08.300181  901115 start.go:496] detecting cgroup driver to use...
	I1217 08:34:08.300226  901115 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:34:08.300291  901115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:34:08.333261  901115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:34:08.355969  901115 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:34:08.356062  901115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:34:08.377179  901115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:34:08.397954  901115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:34:08.500475  901115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:34:08.604668  901115 docker.go:234] disabling docker service ...
	I1217 08:34:08.604733  901115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:34:08.625158  901115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:34:08.639634  901115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:34:08.737153  901115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:34:08.827805  901115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:34:08.842243  901115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:34:08.858330  901115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:34:08.858433  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.876760  901115 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:34:08.876835  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.887944  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.898553  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.910181  901115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:34:08.920514  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.930523  901115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.947013  901115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:08.958410  901115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:34:08.968015  901115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:34:08.977902  901115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:09.073664  901115 ssh_runner.go:195] Run: sudo systemctl restart crio
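The block above points CRI-O at the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup manager by rewriting keys in /etc/crio/crio.conf.d/02-crio.conf with sed, then restarts the service. A small Go sketch of the same kind of in-place key rewrite, illustrative only; the test harness shells out to sed exactly as logged:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// setKey rewrites every `key = ...` line in a TOML-style config to the given value.
func setKey(contents, key, value string) string {
	var out []string
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(strings.TrimSpace(line), key) && strings.Contains(line, "=") {
			line = fmt.Sprintf("%s = %q", key, value)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n") + "\n"
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	updated := setKey(string(data), "pause_image", "registry.k8s.io/pause:3.10.1")
	updated = setKey(updated, "cgroup_manager", "systemd")
	if err := os.WriteFile(path, []byte(updated), 0o644); err != nil {
		panic(err)
	}
	// A "systemctl restart crio" would follow, as in the log.
}
```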
	I1217 08:34:09.227062  901115 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:34:09.227140  901115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:34:09.231935  901115 start.go:564] Will wait 60s for crictl version
	I1217 08:34:09.232006  901115 ssh_runner.go:195] Run: which crictl
	I1217 08:34:09.236638  901115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:34:09.264790  901115 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:34:09.264868  901115 ssh_runner.go:195] Run: crio --version
	I1217 08:34:09.296387  901115 ssh_runner.go:195] Run: crio --version
	I1217 08:34:09.330094  901115 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 08:34:09.331636  901115 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:34:09.351160  901115 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 08:34:09.355602  901115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:09.368495  901115 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 08:34:09.369744  901115 kubeadm.go:884] updating cluster {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:34:09.369922  901115 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:34:09.369998  901115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:09.405950  901115 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:09.405968  901115 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:34:09.406008  901115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:09.434170  901115 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:09.434197  901115 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:34:09.434206  901115 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 08:34:09.434311  901115 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-441323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:34:09.434396  901115 ssh_runner.go:195] Run: crio config
	I1217 08:34:09.487986  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:34:09.488011  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:09.488036  901115 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 08:34:09.488070  901115 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-441323 NodeName:newest-cni-441323 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:34:09.488225  901115 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-441323"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:34:09.488304  901115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 08:34:09.497673  901115 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:34:09.497760  901115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:34:09.506228  901115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:34:09.519933  901115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:34:09.537334  901115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
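The kubeadm.yaml written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick sanity check before handing such a file to kubeadm is to decode each document and print its kind; the sketch below uses gopkg.in/yaml.v3 and is an editor's assumption, not something the test harness does:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Decode returns io.EOF once every document in the stream has been read.
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```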
	I1217 08:34:09.551357  901115 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:34:09.555461  901115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:09.566565  901115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:09.651316  901115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:09.675288  901115 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323 for IP: 192.168.76.2
	I1217 08:34:09.675316  901115 certs.go:195] generating shared ca certs ...
	I1217 08:34:09.675339  901115 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.675523  901115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:34:09.675593  901115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:34:09.675608  901115 certs.go:257] generating profile certs ...
	I1217 08:34:09.675704  901115 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key
	I1217 08:34:09.675734  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt with IP's: []
	I1217 08:34:09.828607  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt ...
	I1217 08:34:09.828649  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.crt: {Name:mk6803cbfa45e76f605eeea681545b33ef9b25d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.828868  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key ...
	I1217 08:34:09.828885  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key: {Name:mk5c01682164f25e871b88d7963d1144482cb1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.828998  901115 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41
	I1217 08:34:09.829019  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 08:34:09.878711  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 ...
	I1217 08:34:09.878745  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41: {Name:mkaef6099c72e1b0c65ea50d007b532d3d965141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.878936  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41 ...
	I1217 08:34:09.878967  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41: {Name:mk0cf2d8641e25fd59679fff6f313c110d8f0f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.879083  901115 certs.go:382] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt.20418f41 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt
	I1217 08:34:09.879203  901115 certs.go:386] copying /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41 -> /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key
	I1217 08:34:09.879299  901115 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key
	I1217 08:34:09.879323  901115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt with IP's: []
	I1217 08:34:09.944132  901115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt ...
	I1217 08:34:09.944163  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt: {Name:mk0b1987acde9defdb8091756bfaa36ff5338b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.944336  901115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key ...
	I1217 08:34:09.944349  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key: {Name:mkfc2f20370f9d628ea83706f59262dab55769e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:09.944585  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:34:09.944629  901115 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:34:09.944641  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:34:09.944672  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:34:09.944697  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:34:09.944723  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:34:09.944764  901115 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:09.945320  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:34:09.964167  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:34:09.982444  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:34:10.000978  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:34:10.020856  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:34:10.040195  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 08:34:10.059883  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:34:10.079301  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:34:10.098390  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:34:10.119913  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:34:10.139098  901115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:34:10.157111  901115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:34:10.170095  901115 ssh_runner.go:195] Run: openssl version
	I1217 08:34:10.176426  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.184186  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:34:10.191942  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.195711  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.195773  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:34:10.231702  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:10.239607  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5560552.pem /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:10.247261  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.255073  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:34:10.262983  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.267054  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.267109  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:10.302271  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:34:10.310816  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 08:34:10.319340  901115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.327110  901115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:34:10.335023  901115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.339265  901115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.339340  901115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:34:10.374610  901115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:34:10.382782  901115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/556055.pem /etc/ssl/certs/51391683.0
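The openssl/ln pairs above install each CA under /etc/ssl/certs using the name OpenSSL's directory lookup expects: `openssl x509 -hash -noout -in cert.pem` prints the subject-name hash (b5213941 for minikubeCA.pem here) and the symlink is that hash plus a ".0" suffix. The same two steps in Go, shelling out to openssl just as the log does; paths are copied from the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks /etc/ssl/certs/<subject-hash>.0 to the given certificate,
// mirroring the "openssl x509 -hash" + "ln -fs" steps in the log.
func linkCACert(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}
```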
	I1217 08:34:10.390828  901115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:34:10.394784  901115 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:34:10.394852  901115 kubeadm.go:401] StartCluster: {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:10.394943  901115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:34:10.395019  901115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:34:10.424810  901115 cri.go:89] found id: ""
	I1217 08:34:10.424888  901115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:34:10.433076  901115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:34:10.441063  901115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 08:34:10.441110  901115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:34:10.448884  901115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:34:10.448906  901115 kubeadm.go:158] found existing configuration files:
	
	I1217 08:34:10.448957  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 08:34:10.456858  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:34:10.456924  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:34:10.465105  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 08:34:10.473594  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:34:10.473659  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:34:10.482079  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 08:34:10.491034  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:34:10.491102  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:34:10.499492  901115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 08:34:10.507794  901115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:34:10.507859  901115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:34:10.515670  901115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 08:34:10.557013  901115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 08:34:10.557109  901115 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:34:10.629084  901115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 08:34:10.629199  901115 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 08:34:10.629303  901115 kubeadm.go:319] OS: Linux
	I1217 08:34:10.629383  901115 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 08:34:10.629463  901115 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 08:34:10.629543  901115 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 08:34:10.629656  901115 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 08:34:10.629746  901115 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 08:34:10.629828  901115 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 08:34:10.629897  901115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 08:34:10.629949  901115 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 08:34:10.691032  901115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:34:10.691132  901115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:34:10.691265  901115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:34:10.699576  901115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:34:10.706854  901115 out.go:252]   - Generating certificates and keys ...
	I1217 08:34:10.706950  901115 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:34:10.707017  901115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:34:10.745346  901115 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:34:10.782675  901115 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:34:10.808586  901115 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:34:10.866200  901115 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:34:10.937802  901115 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:34:10.938015  901115 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-441323] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 08:34:11.066458  901115 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:34:11.066678  901115 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-441323] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 08:34:11.167863  901115 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:34:11.242887  901115 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:34:11.388467  901115 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:34:11.388602  901115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:34:11.472085  901115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:34:11.621733  901115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:34:11.650118  901115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:34:11.858013  901115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:34:12.025062  901115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:34:12.025578  901115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:34:12.031983  901115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:34:12.034787  901115 out.go:252]   - Booting up control plane ...
	I1217 08:34:12.034921  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:34:12.035032  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:34:12.035109  901115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:34:12.053701  901115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:34:12.053874  901115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:34:12.061199  901115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:34:12.061396  901115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:34:12.061467  901115 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:34:12.160091  901115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:34:12.160237  901115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:34:12.661926  901115 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8228ms
	I1217 08:34:12.666583  901115 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:34:12.666727  901115 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 08:34:12.666872  901115 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:34:12.666948  901115 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:34:13.672038  901115 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005468722s
	I1217 08:34:14.481559  901115 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.814984426s
	I1217 08:34:16.168904  901115 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502284655s
	I1217 08:34:16.186450  901115 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:34:16.198199  901115 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:34:16.209895  901115 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:34:16.210178  901115 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-441323 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:34:16.220268  901115 kubeadm.go:319] [bootstrap-token] Using token: 9ej1lh.70xuxr5pnsrao1sw
	I1217 08:34:16.221916  901115 out.go:252]   - Configuring RBAC rules ...
	I1217 08:34:16.222074  901115 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:34:16.227780  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:34:16.235400  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:34:16.238948  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:34:16.243237  901115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:34:16.246753  901115 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:34:16.576527  901115 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:34:16.994999  901115 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 08:34:17.577565  901115 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:34:17.578524  901115 kubeadm.go:319] 
	I1217 08:34:17.578648  901115 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:34:17.578658  901115 kubeadm.go:319] 
	I1217 08:34:17.578769  901115 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:34:17.578784  901115 kubeadm.go:319] 
	I1217 08:34:17.578806  901115 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:34:17.578878  901115 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:34:17.578965  901115 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:34:17.578981  901115 kubeadm.go:319] 
	I1217 08:34:17.579049  901115 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:34:17.579065  901115 kubeadm.go:319] 
	I1217 08:34:17.579128  901115 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:34:17.579136  901115 kubeadm.go:319] 
	I1217 08:34:17.579221  901115 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:34:17.579317  901115 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:34:17.579402  901115 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:34:17.579412  901115 kubeadm.go:319] 
	I1217 08:34:17.579522  901115 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:34:17.579646  901115 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:34:17.579657  901115 kubeadm.go:319] 
	I1217 08:34:17.579757  901115 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9ej1lh.70xuxr5pnsrao1sw \
	I1217 08:34:17.579888  901115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 \
	I1217 08:34:17.579922  901115 kubeadm.go:319] 	--control-plane 
	I1217 08:34:17.579931  901115 kubeadm.go:319] 
	I1217 08:34:17.580054  901115 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:34:17.580062  901115 kubeadm.go:319] 
	I1217 08:34:17.580155  901115 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9ej1lh.70xuxr5pnsrao1sw \
	I1217 08:34:17.580304  901115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:45669db99ae50cd10ea0f5d03393414122f6bd18fd42da373377b5dbf2c0cae6 
	I1217 08:34:17.582979  901115 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 08:34:17.583110  901115 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:34:17.583140  901115 cni.go:84] Creating CNI manager for ""
	I1217 08:34:17.583153  901115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:17.585317  901115 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 08:34:17.586963  901115 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 08:34:17.591670  901115 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 08:34:17.591691  901115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 08:34:17.606505  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 08:34:17.830969  901115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:34:17.831056  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:17.831105  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-441323 minikube.k8s.io/updated_at=2025_12_17T08_34_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=newest-cni-441323 minikube.k8s.io/primary=true
	I1217 08:34:17.930527  901115 ops.go:34] apiserver oom_adj: -16
	I1217 08:34:17.930575  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:18.431515  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:18.930659  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:19.431424  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:19.931089  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:20.430688  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:20.931207  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:21.431505  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:21.931469  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:22.430735  901115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:34:22.521343  901115 kubeadm.go:1114] duration metric: took 4.690342562s to wait for elevateKubeSystemPrivileges
	I1217 08:34:22.521381  901115 kubeadm.go:403] duration metric: took 12.126534838s to StartCluster
	I1217 08:34:22.521406  901115 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:22.521485  901115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:34:22.522808  901115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:22.523110  901115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:34:22.523123  901115 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:34:22.523200  901115 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:34:22.523318  901115 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-441323"
	I1217 08:34:22.523329  901115 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:22.523335  901115 addons.go:70] Setting default-storageclass=true in profile "newest-cni-441323"
	I1217 08:34:22.523353  901115 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-441323"
	I1217 08:34:22.523373  901115 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-441323"
	I1217 08:34:22.523386  901115 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:22.523822  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:22.523984  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:22.525674  901115 out.go:179] * Verifying Kubernetes components...
	I1217 08:34:22.527067  901115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:22.547203  901115 addons.go:239] Setting addon default-storageclass=true in "newest-cni-441323"
	I1217 08:34:22.547244  901115 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:22.547678  901115 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:22.551593  901115 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:34:22.553716  901115 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:34:22.553750  901115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:34:22.553821  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:22.587926  901115 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:34:22.588023  901115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:34:22.588521  901115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:22.591602  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:22.618514  901115 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:22.635269  901115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:34:22.686673  901115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:22.715208  901115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:34:22.750261  901115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:34:22.854117  901115 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 08:34:22.855257  901115 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:34:22.855321  901115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:34:23.068950  901115 api_server.go:72] duration metric: took 545.783895ms to wait for apiserver process to appear ...
	I1217 08:34:23.068996  901115 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:34:23.069080  901115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:23.076107  901115 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:34:23.077910  901115 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:34:23.077948  901115 api_server.go:131] duration metric: took 8.942399ms to wait for apiserver health ...
	I1217 08:34:23.077961  901115 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:34:23.081299  901115 system_pods.go:59] 8 kube-system pods found
	I1217 08:34:23.081337  901115 system_pods.go:61] "coredns-7d764666f9-mbqs4" [3b7c3c61-8c2e-48ea-92b5-1af40280abb5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 08:34:23.081349  901115 system_pods.go:61] "etcd-newest-cni-441323" [6a0673c4-7e59-496d-90bf-fb6a7588302a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:34:23.081360  901115 system_pods.go:61] "kindnet-5mpr4" [1249690d-960e-4091-9a1a-0eebd4e957c6] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:34:23.081367  901115 system_pods.go:61] "kube-apiserver-newest-cni-441323" [476ab60a-16ad-45d2-8fa6-ac1163efeb38] Running
	I1217 08:34:23.081374  901115 system_pods.go:61] "kube-controller-manager-newest-cni-441323" [fc64d625-e270-4987-a8d2-0daa3bb0e059] Running
	I1217 08:34:23.081381  901115 system_pods.go:61] "kube-proxy-pp5v6" [92bd18af-4b69-46fc-8dbb-d0fe791260b4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:34:23.081396  901115 system_pods.go:61] "kube-scheduler-newest-cni-441323" [32205d98-c144-4d9f-98c4-aba22a024602] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:34:23.081402  901115 system_pods.go:61] "storage-provisioner" [f0eed8b6-90b3-4a5f-8f84-dd9ed3415dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 08:34:23.081415  901115 system_pods.go:74] duration metric: took 3.446174ms to wait for pod list to return data ...
	I1217 08:34:23.081426  901115 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:34:23.082202  901115 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 08:34:23.083777  901115 addons.go:530] duration metric: took 560.57768ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 08:34:23.084172  901115 default_sa.go:45] found service account: "default"
	I1217 08:34:23.084190  901115 default_sa.go:55] duration metric: took 2.758225ms for default service account to be created ...
	I1217 08:34:23.084202  901115 kubeadm.go:587] duration metric: took 561.044703ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:34:23.084221  901115 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:34:23.099903  901115 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:34:23.099932  901115 node_conditions.go:123] node cpu capacity is 8
	I1217 08:34:23.099947  901115 node_conditions.go:105] duration metric: took 15.720462ms to run NodePressure ...
	I1217 08:34:23.099960  901115 start.go:242] waiting for startup goroutines ...
	I1217 08:34:23.359262  901115 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-441323" context rescaled to 1 replicas
	I1217 08:34:23.359309  901115 start.go:247] waiting for cluster config update ...
	I1217 08:34:23.359324  901115 start.go:256] writing updated cluster config ...
	I1217 08:34:23.359678  901115 ssh_runner.go:195] Run: rm -f paused
	I1217 08:34:23.408348  901115 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:34:23.410093  901115 out.go:179] * Done! kubectl is now configured to use "newest-cni-441323" cluster and "default" namespace by default
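
The api_server.go wait recorded above is a plain poll of the apiserver's /healthz endpoint at https://192.168.76.2:8443 until it answers 200. A minimal way to repeat that check by hand, assuming the kubectl context carries the profile name (minikube's default) and that curl is present in the node image:

	kubectl --context newest-cni-441323 get --raw /healthz
	minikube -p newest-cni-441323 ssh -- curl -sk https://192.168.76.2:8443/healthz

Both should print "ok" while the control plane stays healthy, matching the 200 response in the log.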
	
	
	==> CRI-O <==
	Dec 17 08:34:12 newest-cni-441323 crio[775]: time="2025-12-17T08:34:12.988920222Z" level=info msg="Started container" PID=1225 containerID=20de30af2ae49860953d9e18cf3cac159d96ce3e867c0bc33b4e778a30c426ed description=kube-system/kube-controller-manager-newest-cni-441323/kube-controller-manager id=eb46cffb-c9c7-4937-aa5d-72f5ec69a111 name=/runtime.v1.RuntimeService/StartContainer sandboxID=679bc037c894be6c8546681945affc535544ef310660844a1e83da0af28deb2c
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.79047634Z" level=info msg="Running pod sandbox: kube-system/kindnet-5mpr4/POD" id=ba810ddf-8c79-45ca-9cc2-e98efe4eb849 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.790590693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.792470681Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-pp5v6/POD" id=c02b29a4-50e1-4c29-87f8-cfbf597b705f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.792635235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.796884529Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c02b29a4-50e1-4c29-87f8-cfbf597b705f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.797381634Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ba810ddf-8c79-45ca-9cc2-e98efe4eb849 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.798678071Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.799296686Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.799715564Z" level=info msg="Ran pod sandbox 9c296643319371bf3a1a90b0562607abc4e1333c89ef7c8267e4eee75bbfe344 with infra container: kube-system/kube-proxy-pp5v6/POD" id=c02b29a4-50e1-4c29-87f8-cfbf597b705f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.800183213Z" level=info msg="Ran pod sandbox 501c520ea21403e7ed21cd3aee463f25dca6afe48e3801c892d7ec8c1d097fea with infra container: kube-system/kindnet-5mpr4/POD" id=ba810ddf-8c79-45ca-9cc2-e98efe4eb849 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.801929864Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=6a99fa22-cdd3-4649-9243-df1089e3a100 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.802230904Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=ec307bd9-17e4-47e2-a9dc-1ca2e248b24d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.802494349Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=ec307bd9-17e4-47e2-a9dc-1ca2e248b24d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.802626213Z" level=info msg="Neither image nor artifact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=ec307bd9-17e4-47e2-a9dc-1ca2e248b24d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.803952599Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=e272d1c2-a5f0-4c4c-9fbd-aa98cf180df5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.803965215Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2ce9d72f-96c3-4c5a-8bc7-70a76fc66208 name=/runtime.v1.ImageService/PullImage
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.809793492Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.811146448Z" level=info msg="Creating container: kube-system/kube-proxy-pp5v6/kube-proxy" id=24f3f0b6-dbc2-4adc-bb09-ff3dafba8fb0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.812071155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.81848302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.81911734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.864224028Z" level=info msg="Created container feade566b97ffc95c640575b4773ce6338946ec307110f01633d3312c527f93d: kube-system/kube-proxy-pp5v6/kube-proxy" id=24f3f0b6-dbc2-4adc-bb09-ff3dafba8fb0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.865442592Z" level=info msg="Starting container: feade566b97ffc95c640575b4773ce6338946ec307110f01633d3312c527f93d" id=3956d3a5-681e-4fcf-8846-9fbd81e1084d name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:34:22 newest-cni-441323 crio[775]: time="2025-12-17T08:34:22.871042225Z" level=info msg="Started container" PID=1588 containerID=feade566b97ffc95c640575b4773ce6338946ec307110f01633d3312c527f93d description=kube-system/kube-proxy-pp5v6/kube-proxy id=3956d3a5-681e-4fcf-8846-9fbd81e1084d name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c296643319371bf3a1a90b0562607abc4e1333c89ef7c8267e4eee75bbfe344
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	feade566b97ff       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   1 second ago        Running             kube-proxy                0                   9c29664331937       kube-proxy-pp5v6                            kube-system
	20de30af2ae49       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   11 seconds ago      Running             kube-controller-manager   0                   679bc037c894b       kube-controller-manager-newest-cni-441323   kube-system
	ddfa147f508aa       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   11 seconds ago      Running             etcd                      0                   07622922e08dd       etcd-newest-cni-441323                      kube-system
	e70f63c1bb84e       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   11 seconds ago      Running             kube-apiserver            0                   e25264e56a1e8       kube-apiserver-newest-cni-441323            kube-system
	ad19a8c423e65       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   11 seconds ago      Running             kube-scheduler            0                   5c86cbc4b5b95       kube-scheduler-newest-cni-441323            kube-system
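
The table above is CRI-O's view of the static pods plus kube-proxy. It can be reproduced on the node with crictl; the container ID below is the kube-proxy container from this run, and the commands assume the default CRI-O socket:

	minikube -p newest-cni-441323 ssh -- sudo crictl ps -a
	minikube -p newest-cni-441323 ssh -- sudo crictl logs feade566b97ff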
	
	
	==> describe nodes <==
	Name:               newest-cni-441323
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-441323
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=newest-cni-441323
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_34_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:34:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-441323
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:34:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:34:16 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:34:16 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:34:16 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 08:34:16 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-441323
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                6a252122-c552-42fb-8ce7-584cc3dce1f6
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-441323                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-5mpr4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-441323             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-441323    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-pp5v6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-441323             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-441323 event: Registered Node newest-cni-441323 in Controller
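
The Ready=False condition above is expected at this point in the run: the kindnet pod that provides the CNI configuration was still being pulled when the snapshot was taken. Two quick checks that would show the CNI coming up, again assuming the context name matches the profile:

	kubectl --context newest-cni-441323 -n kube-system get pods -o wide
	minikube -p newest-cni-441323 ssh -- ls /etc/cni/net.d/

Once kindnet drops its config into /etc/cni/net.d/, the node's Ready condition should flip to True.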
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [ddfa147f508aa319f01e65852eab3d7058592d64a5f4c4019cf20eb26d6577c2] <==
	{"level":"info","ts":"2025-12-17T08:34:13.029107Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T08:34:13.519866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T08:34:13.519935Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T08:34:13.519988Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-17T08:34:13.519998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:34:13.520016Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:13.520917Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:13.521021Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:34:13.521044Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:13.521054Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:13.522006Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:34:13.522871Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:34:13.522874Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-441323 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:34:13.522919Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:34:13.523225Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:34:13.523325Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:34:13.523231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:34:13.523379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:34:13.523363Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T08:34:13.523383Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-17T08:34:13.523475Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-17T08:34:13.524256Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:34:13.524389Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:34:13.527810Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-17T08:34:13.527855Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:34:25 up  2:16,  0 user,  load average: 4.33, 4.11, 2.92
	Linux newest-cni-441323 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [e70f63c1bb84e0ff81a62e4af35ece300918afd29db28bb85997a19ab1b99c53] <==
	I1217 08:34:14.525373       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:34:14.529900       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:34:14.537587       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:34:14.538726       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 08:34:14.545114       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:34:14.545303       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 08:34:14.549051       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:34:14.719769       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:34:15.431258       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 08:34:15.435753       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 08:34:15.435772       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:34:16.050496       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:34:16.107604       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:34:16.234579       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 08:34:16.242166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1217 08:34:16.243512       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:34:16.248347       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:34:16.454413       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:34:16.981421       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:34:16.993980       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 08:34:17.003403       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:34:22.209069       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:34:22.214486       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 08:34:22.406983       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:34:22.457093       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [20de30af2ae49860953d9e18cf3cac159d96ce3e867c0bc33b4e778a30c426ed] <==
	I1217 08:34:21.258113       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258121       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258200       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258252       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.257632       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258445       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258510       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258559       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258585       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258508       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 08:34:21.258817       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-441323"
	I1217 08:34:21.258910       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 08:34:21.258935       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.258995       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.259057       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.259072       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.260024       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.260040       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.263226       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.266087       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:21.274784       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-441323" podCIDRs=["10.42.0.0/24"]
	I1217 08:34:21.356957       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:21.356981       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:34:21.356986       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:34:21.367196       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [feade566b97ffc95c640575b4773ce6338946ec307110f01633d3312c527f93d] <==
	I1217 08:34:22.923946       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:34:22.990451       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:23.091417       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:23.091453       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:34:23.091580       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:34:23.111264       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:34:23.111340       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:34:23.116906       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:34:23.117280       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:34:23.117297       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:34:23.118693       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:34:23.118720       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:34:23.118765       1 config.go:200] "Starting service config controller"
	I1217 08:34:23.118771       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:34:23.118802       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:34:23.118814       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:34:23.118906       1 config.go:309] "Starting node config controller"
	I1217 08:34:23.118922       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:34:23.118934       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:34:23.218979       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:34:23.219008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:34:23.218987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
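
kube-proxy reports the iptables proxier here, so the service rules it programs can be inspected directly on the node. A minimal sketch, assuming the standard KUBE-SERVICES chain that the iptables mode creates in the nat table:

	minikube -p newest-cni-441323 ssh -- sudo iptables -t nat -S KUBE-SERVICES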
	
	
	==> kube-scheduler [ad19a8c423e65bf8df76b1ba38463b40049244636cd7e3ed31c4ae90838f0a1d] <==
	E1217 08:34:14.482276       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 08:34:14.482356       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 08:34:14.482588       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 08:34:14.482671       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 08:34:14.482767       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 08:34:14.482803       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 08:34:14.482908       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 08:34:14.483172       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 08:34:14.483422       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 08:34:14.483526       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1217 08:34:14.483835       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 08:34:14.483836       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 08:34:15.300782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 08:34:15.334420       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 08:34:15.345448       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 08:34:15.475731       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 08:34:15.527233       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 08:34:15.604259       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 08:34:15.653352       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 08:34:15.789640       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 08:34:15.789686       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1217 08:34:15.792251       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 08:34:15.826073       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 08:34:15.864207       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1217 08:34:17.674866       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 08:34:17 newest-cni-441323 kubelet[1303]: E1217 08:34:17.872728    1303 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-441323\" already exists" pod="kube-system/kube-scheduler-newest-cni-441323"
	Dec 17 08:34:17 newest-cni-441323 kubelet[1303]: E1217 08:34:17.872796    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-441323" containerName="kube-scheduler"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: I1217 08:34:18.005787    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-441323" podStartSLOduration=3.005771174 podStartE2EDuration="3.005771174s" podCreationTimestamp="2025-12-17 08:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:34:17.992358414 +0000 UTC m=+1.256211111" watchObservedRunningTime="2025-12-17 08:34:18.005771174 +0000 UTC m=+1.269623870"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: I1217 08:34:18.014751    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-441323" podStartSLOduration=2.014732659 podStartE2EDuration="2.014732659s" podCreationTimestamp="2025-12-17 08:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:34:18.006079774 +0000 UTC m=+1.269932469" watchObservedRunningTime="2025-12-17 08:34:18.014732659 +0000 UTC m=+1.278585356"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: I1217 08:34:18.024832    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-441323" podStartSLOduration=2.02481111 podStartE2EDuration="2.02481111s" podCreationTimestamp="2025-12-17 08:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:34:18.015003028 +0000 UTC m=+1.278855724" watchObservedRunningTime="2025-12-17 08:34:18.02481111 +0000 UTC m=+1.288663807"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: E1217 08:34:18.859513    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-441323" containerName="kube-controller-manager"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: E1217 08:34:18.859621    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-441323" containerName="kube-scheduler"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: E1217 08:34:18.859781    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-441323" containerName="kube-apiserver"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: E1217 08:34:18.859852    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-441323" containerName="etcd"
	Dec 17 08:34:18 newest-cni-441323 kubelet[1303]: I1217 08:34:18.872591    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-441323" podStartSLOduration=2.872570757 podStartE2EDuration="2.872570757s" podCreationTimestamp="2025-12-17 08:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:34:18.024759887 +0000 UTC m=+1.288612583" watchObservedRunningTime="2025-12-17 08:34:18.872570757 +0000 UTC m=+2.136423453"
	Dec 17 08:34:19 newest-cni-441323 kubelet[1303]: E1217 08:34:19.862304    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-441323" containerName="etcd"
	Dec 17 08:34:19 newest-cni-441323 kubelet[1303]: E1217 08:34:19.869181    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-441323" containerName="kube-scheduler"
	Dec 17 08:34:20 newest-cni-441323 kubelet[1303]: E1217 08:34:20.863455    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-441323" containerName="kube-scheduler"
	Dec 17 08:34:21 newest-cni-441323 kubelet[1303]: I1217 08:34:21.375920    1303 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 17 08:34:21 newest-cni-441323 kubelet[1303]: I1217 08:34:21.376706    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.557859    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66hf8\" (UniqueName: \"kubernetes.io/projected/1249690d-960e-4091-9a1a-0eebd4e957c6-kube-api-access-66hf8\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.557919    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92bd18af-4b69-46fc-8dbb-d0fe791260b4-xtables-lock\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.557954    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wttz\" (UniqueName: \"kubernetes.io/projected/92bd18af-4b69-46fc-8dbb-d0fe791260b4-kube-api-access-5wttz\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.557983    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-cni-cfg\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.558003    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-xtables-lock\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.558025    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-lib-modules\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.558044    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92bd18af-4b69-46fc-8dbb-d0fe791260b4-kube-proxy\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:22 newest-cni-441323 kubelet[1303]: I1217 08:34:22.558066    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92bd18af-4b69-46fc-8dbb-d0fe791260b4-lib-modules\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:23 newest-cni-441323 kubelet[1303]: E1217 08:34:23.612694    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-441323" containerName="kube-controller-manager"
	Dec 17 08:34:23 newest-cni-441323 kubelet[1303]: I1217 08:34:23.891927    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-pp5v6" podStartSLOduration=1.891906377 podStartE2EDuration="1.891906377s" podCreationTimestamp="2025-12-17 08:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:34:23.891833232 +0000 UTC m=+7.155685930" watchObservedRunningTime="2025-12-17 08:34:23.891906377 +0000 UTC m=+7.155759075"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-441323 -n newest-cni-441323
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-441323 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-mbqs4 kindnet-5mpr4 storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 kindnet-5mpr4 storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 kindnet-5mpr4 storage-provisioner: exit status 1 (68.372652ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-mbqs4" not found
	Error from server (NotFound): pods "kindnet-5mpr4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 kindnet-5mpr4 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.34s)
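The post-mortem above first lists non-running pods with kubectl's field selector across all namespaces, then the follow-up describe fails with NotFound, most likely because the pod names (kube-system pods such as coredns and storage-provisioner) are described without a namespace flag. For reference, a minimal client-go sketch of the same "status.phase!=Running" query — illustrative only, not part of the test harness; the context name is taken from the logs above:

// Illustrative only: performs the same query as the
// "--field-selector=status.phase!=Running" kubectl call in the post-mortem.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig with the context used by the harness ("newest-cni-441323").
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "newest-cni-441323"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Empty namespace ("") lists across all namespaces, matching kubectl -A.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Print namespace/name so a follow-up describe can target the right
		// namespace (the NotFound errors above come from describing
		// kube-system pods in the default namespace).
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}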

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-441323 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-441323 --alsologtostderr -v=1: exit status 80 (1.88527991s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-441323 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:34:54.340940  911568 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:34:54.341212  911568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:54.341222  911568 out.go:374] Setting ErrFile to fd 2...
	I1217 08:34:54.341226  911568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:54.341403  911568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:34:54.341681  911568 out.go:368] Setting JSON to false
	I1217 08:34:54.341708  911568 mustload.go:66] Loading cluster: newest-cni-441323
	I1217 08:34:54.342089  911568 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:54.342453  911568 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:54.361901  911568 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:54.362323  911568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:34:54.423023  911568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-17 08:34:54.413143309 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:34:54.423694  911568 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-441323 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 08:34:54.426139  911568 out.go:179] * Pausing node newest-cni-441323 ... 
	I1217 08:34:54.427646  911568 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:54.427930  911568 ssh_runner.go:195] Run: systemctl --version
	I1217 08:34:54.427973  911568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:54.448261  911568 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:54.542334  911568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:54.555034  911568 pause.go:52] kubelet running: true
	I1217 08:34:54.555093  911568 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:54.693582  911568 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:54.693666  911568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:54.762244  911568 cri.go:89] found id: "d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae"
	I1217 08:34:54.762267  911568 cri.go:89] found id: "aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f"
	I1217 08:34:54.762273  911568 cri.go:89] found id: "7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f"
	I1217 08:34:54.762277  911568 cri.go:89] found id: "0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab"
	I1217 08:34:54.762282  911568 cri.go:89] found id: "a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90"
	I1217 08:34:54.762286  911568 cri.go:89] found id: "140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6"
	I1217 08:34:54.762291  911568 cri.go:89] found id: ""
	I1217 08:34:54.762337  911568 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:54.774489  911568 retry.go:31] will retry after 130.282659ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:54Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:54.905967  911568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:54.919202  911568 pause.go:52] kubelet running: false
	I1217 08:34:54.919272  911568 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:55.037014  911568 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:55.037089  911568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:55.107659  911568 cri.go:89] found id: "d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae"
	I1217 08:34:55.107684  911568 cri.go:89] found id: "aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f"
	I1217 08:34:55.107688  911568 cri.go:89] found id: "7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f"
	I1217 08:34:55.107692  911568 cri.go:89] found id: "0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab"
	I1217 08:34:55.107695  911568 cri.go:89] found id: "a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90"
	I1217 08:34:55.107699  911568 cri.go:89] found id: "140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6"
	I1217 08:34:55.107702  911568 cri.go:89] found id: ""
	I1217 08:34:55.107741  911568 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:55.120087  911568 retry.go:31] will retry after 259.266125ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:55Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:55.379620  911568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:55.400335  911568 pause.go:52] kubelet running: false
	I1217 08:34:55.400416  911568 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:55.525997  911568 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:55.526078  911568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:55.596221  911568 cri.go:89] found id: "d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae"
	I1217 08:34:55.596250  911568 cri.go:89] found id: "aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f"
	I1217 08:34:55.596255  911568 cri.go:89] found id: "7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f"
	I1217 08:34:55.596259  911568 cri.go:89] found id: "0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab"
	I1217 08:34:55.596262  911568 cri.go:89] found id: "a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90"
	I1217 08:34:55.596265  911568 cri.go:89] found id: "140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6"
	I1217 08:34:55.596268  911568 cri.go:89] found id: ""
	I1217 08:34:55.596307  911568 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:55.607984  911568 retry.go:31] will retry after 302.211167ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:55Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:55.910514  911568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:34:55.924601  911568 pause.go:52] kubelet running: false
	I1217 08:34:55.924663  911568 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 08:34:56.054171  911568 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 08:34:56.054258  911568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 08:34:56.129160  911568 cri.go:89] found id: "d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae"
	I1217 08:34:56.129184  911568 cri.go:89] found id: "aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f"
	I1217 08:34:56.129189  911568 cri.go:89] found id: "7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f"
	I1217 08:34:56.129192  911568 cri.go:89] found id: "0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab"
	I1217 08:34:56.129195  911568 cri.go:89] found id: "a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90"
	I1217 08:34:56.129199  911568 cri.go:89] found id: "140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6"
	I1217 08:34:56.129203  911568 cri.go:89] found id: ""
	I1217 08:34:56.129253  911568 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 08:34:56.145287  911568 out.go:203] 
	W1217 08:34:56.146923  911568 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 08:34:56.146946  911568 out.go:285] * 
	* 
	W1217 08:34:56.152106  911568 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 08:34:56.153762  911568 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-441323 --alsologtostderr -v=1 failed: exit status 80
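The pause failure above is driven by `sudo runc list -f json` returning "open /run/runc: no such file or directory"; the stderr log shows minikube retrying the listing with growing delays (roughly 130ms, 259ms, 302ms) before giving up with GUEST_PAUSE. A minimal sketch of that retry-then-fail pattern — names, attempt count, and backoff values here are illustrative assumptions, not minikube's actual retry helper:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing delay
// between failures, and returns the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter, roughly matching the
		// "will retry after ..." lines in the log above.
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(3, 150*time.Millisecond, func() error {
		// Same command the pause path shells out to on the node.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %v: %s", err, out)
		}
		return nil
	})
	if err != nil {
		// In minikube this surfaces as the GUEST_PAUSE exit seen above.
		fmt.Println("giving up:", err)
	}
}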
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-441323
helpers_test.go:244: (dbg) docker inspect newest-cni-441323:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4",
	        "Created": "2025-12-17T08:34:03.147489501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 909776,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:34:44.210348896Z",
	            "FinishedAt": "2025-12-17T08:34:43.338214669Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/hosts",
	        "LogPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4-json.log",
	        "Name": "/newest-cni-441323",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-441323:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-441323",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4",
	                "LowerDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-441323",
	                "Source": "/var/lib/docker/volumes/newest-cni-441323/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-441323",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-441323",
	                "name.minikube.sigs.k8s.io": "newest-cni-441323",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f8d20a5888c12816e66eb1dac41b2fba0fbc976f064086f7cc40989546d9e374",
	            "SandboxKey": "/var/run/docker/netns/f8d20a5888c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33543"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-441323": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3d2e6bc2ee52a92bd5fc401c42f5bb70038c4ef50a33f6f2359c529ab511ea2",
	                    "EndpointID": "34790b8903961489951cf69a5dfab3d65830004c6beecfe80ce573f9564352f0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ea:1c:cc:2a:cb:c2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-441323",
	                        "5e7ea243e76c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
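The inspect output above records the dynamically assigned host ports under NetworkSettings.Ports (e.g. 22/tcp mapped to 127.0.0.1:33540), which is what the pause log resolved earlier with a Go template. A minimal sketch, assuming the docker CLI is on PATH, of pulling that port programmatically (illustrative, not harness code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the pause log uses to resolve the node's SSH port.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"newest-cni-441323").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33540
}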
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323: exit status 2 (337.345618ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-441323 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ image   │ no-preload-936988 image list --format=json                                                                                                                                                                                                         │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p no-preload-936988 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ default-k8s-diff-port-225657 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ pause   │ -p default-k8s-diff-port-225657 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-225657                                                                                                                                                                                                                    │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-441323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-225657                                                                                                                                                                                                                    │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ stop    │ -p newest-cni-441323 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-441323 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ newest-cni-441323 image list --format=json                                                                                                                                                                                                         │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ pause   │ -p newest-cni-441323 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:34:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:34:43.974125  909575 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:34:43.974226  909575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:43.974231  909575 out.go:374] Setting ErrFile to fd 2...
	I1217 08:34:43.974234  909575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:43.974473  909575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:34:43.974950  909575 out.go:368] Setting JSON to false
	I1217 08:34:43.975994  909575 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8229,"bootTime":1765952255,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:34:43.976061  909575 start.go:143] virtualization: kvm guest
	I1217 08:34:43.978761  909575 out.go:179] * [newest-cni-441323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:34:43.980658  909575 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:34:43.980717  909575 notify.go:221] Checking for updates...
	I1217 08:34:43.983768  909575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:34:43.985327  909575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:34:43.986980  909575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:34:43.988840  909575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:34:43.990672  909575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:34:43.992623  909575 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:43.993208  909575 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:34:44.017420  909575 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:34:44.017527  909575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:34:44.074200  909575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-17 08:34:44.063237382 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:34:44.074355  909575 docker.go:319] overlay module found
	I1217 08:34:44.076416  909575 out.go:179] * Using the docker driver based on existing profile
	I1217 08:34:44.077650  909575 start.go:309] selected driver: docker
	I1217 08:34:44.077668  909575 start.go:927] validating driver "docker" against &{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:44.077768  909575 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:34:44.078355  909575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:34:44.132994  909575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-17 08:34:44.123243604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:34:44.133299  909575 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:34:44.133326  909575 cni.go:84] Creating CNI manager for ""
	I1217 08:34:44.133395  909575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:44.133428  909575 start.go:353] cluster config:
	{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:44.135596  909575 out.go:179] * Starting "newest-cni-441323" primary control-plane node in "newest-cni-441323" cluster
	I1217 08:34:44.136911  909575 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:34:44.138501  909575 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:34:44.139879  909575 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:34:44.139929  909575 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 08:34:44.139951  909575 cache.go:65] Caching tarball of preloaded images
	I1217 08:34:44.140000  909575 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:34:44.140075  909575 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:34:44.140091  909575 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 08:34:44.140221  909575 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:34:44.162024  909575 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:34:44.162047  909575 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:34:44.162064  909575 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:34:44.162096  909575 start.go:360] acquireMachinesLock for newest-cni-441323: {Name:mk9498dbb1eb77dbf697c7e17cff718c09574836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:34:44.162152  909575 start.go:364] duration metric: took 38.334µs to acquireMachinesLock for "newest-cni-441323"
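The acquireMachinesLock lines above follow a retry-until-deadline pattern (the log shows Delay:500ms and Timeout:10m0s). A minimal Go sketch of that pattern, using a plain sync.Mutex purely for illustration rather than minikube's file-backed profile lock:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// acquireWithTimeout retries TryLock every `delay` until `timeout` elapses.
// Delay/Timeout mirror the log; the mutex stands in for the real lock.
func acquireWithTimeout(mu *sync.Mutex, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if mu.TryLock() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	var mu sync.Mutex
	start := time.Now()
	if err := acquireWithTimeout(&mu, 500*time.Millisecond, 10*time.Minute); err != nil {
		panic(err)
	}
	defer mu.Unlock()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}

Here the lock is uncontended, so acquisition is effectively instant, matching the 38µs reported above.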
	I1217 08:34:44.162176  909575 start.go:96] Skipping create...Using existing machine configuration
	I1217 08:34:44.162184  909575 fix.go:54] fixHost starting: 
	I1217 08:34:44.162385  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:44.180334  909575 fix.go:112] recreateIfNeeded on newest-cni-441323: state=Stopped err=<nil>
	W1217 08:34:44.180368  909575 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 08:34:44.182369  909575 out.go:252] * Restarting existing docker container for "newest-cni-441323" ...
	I1217 08:34:44.182436  909575 cli_runner.go:164] Run: docker start newest-cni-441323
	I1217 08:34:44.436751  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:44.456512  909575 kic.go:432] container "newest-cni-441323" state is running.
	I1217 08:34:44.456915  909575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:44.476616  909575 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:34:44.476926  909575 machine.go:94] provisionDockerMachine start ...
	I1217 08:34:44.477031  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:44.496801  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:44.496940  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:44.496955  909575 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:34:44.497749  909575 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59626->127.0.0.1:33540: read: connection reset by peer
	I1217 08:34:47.626911  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:47.626947  909575 ubuntu.go:182] provisioning hostname "newest-cni-441323"
	I1217 08:34:47.627029  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:47.646911  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:47.647030  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:47.647044  909575 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-441323 && echo "newest-cni-441323" | sudo tee /etc/hostname
	I1217 08:34:47.785762  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:47.785855  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:47.805254  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:47.805353  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:47.805387  909575 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-441323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-441323/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-441323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:34:47.932897  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: 
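The SSH snippet above makes the node's hostname resolve locally: if no /etc/hosts line already ends in newest-cni-441323, it either rewrites the existing 127.0.1.1 entry or appends one. A hedged Go sketch of the same idempotent edit (it only prints the result instead of writing back with sudo; the path and hostname come from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostname = "newest-cni-441323"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")

	// Step 1: is the hostname already mapped? If so, leave the file alone.
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			fmt.Print(string(data))
			return
		}
	}
	// Step 2: rewrite an existing 127.0.1.1 entry, or append a new one.
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	fmt.Println(strings.Join(lines, "\n"))
}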
	I1217 08:34:47.932932  909575 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:34:47.932971  909575 ubuntu.go:190] setting up certificates
	I1217 08:34:47.932994  909575 provision.go:84] configureAuth start
	I1217 08:34:47.933069  909575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:47.951754  909575 provision.go:143] copyHostCerts
	I1217 08:34:47.951850  909575 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:34:47.951873  909575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:34:47.951962  909575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:34:47.952126  909575 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:34:47.952140  909575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:34:47.952185  909575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:34:47.952287  909575 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:34:47.952301  909575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:34:47.952348  909575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:34:47.952444  909575 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.newest-cni-441323 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-441323]
	I1217 08:34:48.000138  909575 provision.go:177] copyRemoteCerts
	I1217 08:34:48.000216  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:34:48.000295  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.019140  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.115224  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:34:48.133979  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 08:34:48.151866  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 08:34:48.170810  909575 provision.go:87] duration metric: took 237.785329ms to configureAuth
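configureAuth above regenerates the machine's server certificate with the SANs listed at 08:34:47.952444 (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-441323). A rough Go sketch of issuing such a certificate with crypto/x509; the CA is generated in memory here for the example (minikube signs with the ca.pem/ca-key.pem files shown in the log), and the key type and validity period are illustrative choices:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA, standing in for minikube's ca.pem / ca-key.pem.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-441323"}},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-441323"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}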
	I1217 08:34:48.170839  909575 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:34:48.171016  909575 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:48.171115  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.190217  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:48.190354  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:48.190377  909575 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:34:48.475905  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:34:48.475936  909575 machine.go:97] duration metric: took 3.998991797s to provisionDockerMachine
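Throughout provisioning, the port used by the SSH client (33540 above) is obtained by asking dockerd which host port is published for the container's 22/tcp, via the Go template visible in the repeated `docker container inspect -f` calls. A small Go sketch of that lookup with os/exec (container name from the log; the extra quotes minikube wraps around the template are dropped so the bare port comes back):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template as in the log lines above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"newest-cni-441323").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("host port mapped to 22/tcp:", strings.TrimSpace(string(out)))
}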
	I1217 08:34:48.475948  909575 start.go:293] postStartSetup for "newest-cni-441323" (driver="docker")
	I1217 08:34:48.475961  909575 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:34:48.476032  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:34:48.476079  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.495988  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.590777  909575 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:34:48.594819  909575 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:34:48.594851  909575 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:34:48.594862  909575 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:34:48.594918  909575 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:34:48.594989  909575 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:34:48.595086  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:34:48.603394  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:48.621570  909575 start.go:296] duration metric: took 145.603503ms for postStartSetup
	I1217 08:34:48.621684  909575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:34:48.621752  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.640688  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.731229  909575 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:34:48.736624  909575 fix.go:56] duration metric: took 4.574429564s for fixHost
	I1217 08:34:48.737642  909575 start.go:83] releasing machines lock for "newest-cni-441323", held for 4.5754524s
	I1217 08:34:48.738010  909575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:48.757035  909575 ssh_runner.go:195] Run: cat /version.json
	I1217 08:34:48.757091  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.757150  909575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:34:48.757235  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.777351  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.777709  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.921987  909575 ssh_runner.go:195] Run: systemctl --version
	I1217 08:34:48.928932  909575 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:34:48.966185  909575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:34:48.971360  909575 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:34:48.971433  909575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:34:48.981112  909575 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:34:48.981143  909575 start.go:496] detecting cgroup driver to use...
	I1217 08:34:48.981180  909575 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:34:48.981241  909575 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:34:48.996835  909575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:34:49.010627  909575 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:34:49.010699  909575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:34:49.026182  909575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:34:49.039313  909575 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:34:49.120242  909575 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:34:49.201888  909575 docker.go:234] disabling docker service ...
	I1217 08:34:49.201964  909575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:34:49.216921  909575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:34:49.230195  909575 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:34:49.314315  909575 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:34:49.392815  909575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:34:49.406135  909575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:34:49.421952  909575 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:34:49.422011  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.431936  909575 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:34:49.431998  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.441904  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.451499  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.461251  909575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:34:49.470255  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.480021  909575 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.489685  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.499548  909575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:34:49.508072  909575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:34:49.516047  909575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:49.596394  909575 ssh_runner.go:195] Run: sudo systemctl restart crio
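The sed one-liners between 08:34:49.42 and 08:34:49.49 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, and re-add conmon_cgroup = "pod" after it. A Go sketch of the equivalent text surgery; the starting file content below is a made-up stand-in, not the node's real drop-in:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting content; the real 02-crio.conf on the node differs.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// Pin the pause image (mirrors the first sed).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch the cgroup manager to systemd (second sed).
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}

After the edit, crio is restarted (as in the log) so the new pause image and cgroup settings take effect.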
	I1217 08:34:49.737657  909575 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:34:49.737749  909575 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:34:49.742160  909575 start.go:564] Will wait 60s for crictl version
	I1217 08:34:49.742226  909575 ssh_runner.go:195] Run: which crictl
	I1217 08:34:49.746104  909575 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:34:49.773351  909575 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 08:34:49.773433  909575 ssh_runner.go:195] Run: crio --version
	I1217 08:34:49.803051  909575 ssh_runner.go:195] Run: crio --version
	I1217 08:34:49.834977  909575 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 08:34:49.836555  909575 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:34:49.856401  909575 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 08:34:49.860870  909575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:49.873288  909575 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 08:34:49.874699  909575 kubeadm.go:884] updating cluster {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:34:49.874862  909575 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:34:49.874925  909575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:49.909258  909575 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:49.909281  909575 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:34:49.909343  909575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:49.936302  909575 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:49.936328  909575 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:34:49.936336  909575 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 08:34:49.936449  909575 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-441323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
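The kubelet drop-in shown above is rendered from the node's name, IP, and Kubernetes version. A simplified Go sketch using text/template; the unit text is trimmed (several ExecStart flags from the log are omitted) and the struct field names are made up for the example:

package main

import (
	"os"
	"text/template"
)

// Trimmed version of the 10-kubeadm.conf drop-in above; some ExecStart flags
// from the log are omitted to keep the sketch short.
var dropIn = template.Must(template.New("kubelet").Parse(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`))

func main() {
	err := dropIn.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.35.0-rc.1", "newest-cni-441323", "192.168.76.2"})
	if err != nil {
		panic(err)
	}
}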
	I1217 08:34:49.936526  909575 ssh_runner.go:195] Run: crio config
	I1217 08:34:49.983906  909575 cni.go:84] Creating CNI manager for ""
	I1217 08:34:49.983930  909575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:49.983950  909575 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 08:34:49.983977  909575 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-441323 NodeName:newest-cni-441323 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:34:49.984118  909575 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-441323"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:34:49.984186  909575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 08:34:49.992902  909575 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:34:49.992973  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:34:50.001410  909575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:34:50.015127  909575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:34:50.028399  909575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1217 08:34:50.042078  909575 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:34:50.046352  909575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:50.057613  909575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:50.137246  909575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:50.162291  909575 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323 for IP: 192.168.76.2
	I1217 08:34:50.162322  909575 certs.go:195] generating shared ca certs ...
	I1217 08:34:50.162344  909575 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.162552  909575 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:34:50.162611  909575 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:34:50.162622  909575 certs.go:257] generating profile certs ...
	I1217 08:34:50.162705  909575 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key
	I1217 08:34:50.162778  909575 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41
	I1217 08:34:50.162814  909575 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key
	I1217 08:34:50.162915  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:34:50.162963  909575 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:34:50.162976  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:34:50.163005  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:34:50.163029  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:34:50.163053  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:34:50.163094  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:50.163714  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:34:50.183114  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:34:50.203501  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:34:50.223916  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:34:50.247994  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:34:50.268287  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 08:34:50.286992  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:34:50.305590  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:34:50.323843  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:34:50.341927  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:34:50.361686  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:34:50.380653  909575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:34:50.393954  909575 ssh_runner.go:195] Run: openssl version
	I1217 08:34:50.400557  909575 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.409202  909575 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:34:50.417927  909575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.422196  909575 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.422258  909575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.457282  909575 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:50.465771  909575 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.473818  909575 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:34:50.481895  909575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.486087  909575 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.486173  909575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.520666  909575 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:34:50.528829  909575 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.537218  909575 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:34:50.546003  909575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.550242  909575 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.550301  909575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.585456  909575 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 08:34:50.594268  909575 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:34:50.599062  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:34:50.633651  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:34:50.669122  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:34:50.717007  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:34:50.759373  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:34:50.813476  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
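Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. The same check in Go, for one of the certificate paths from the log (it would run on the node; the exit behaviour mirrors openssl's):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certs checked above; the other listed paths work the same way.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}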
	I1217 08:34:50.860731  909575 kubeadm.go:401] StartCluster: {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:50.860846  909575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:34:50.860912  909575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:34:50.893785  909575 cri.go:89] found id: "7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f"
	I1217 08:34:50.893812  909575 cri.go:89] found id: "0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab"
	I1217 08:34:50.893818  909575 cri.go:89] found id: "a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90"
	I1217 08:34:50.893842  909575 cri.go:89] found id: "140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6"
	I1217 08:34:50.893847  909575 cri.go:89] found id: ""
	I1217 08:34:50.893893  909575 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:34:50.906570  909575 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:50Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:50.906654  909575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:34:50.915626  909575 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:34:50.915682  909575 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:34:50.915730  909575 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:34:50.923627  909575 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:34:50.924113  909575 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-441323" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:34:50.924242  909575 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-441323" cluster setting kubeconfig missing "newest-cni-441323" context setting]
	I1217 08:34:50.924543  909575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.925930  909575 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:34:50.934379  909575 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 08:34:50.934425  909575 kubeadm.go:602] duration metric: took 18.736248ms to restartPrimaryControlPlane
	I1217 08:34:50.934437  909575 kubeadm.go:403] duration metric: took 73.718996ms to StartCluster
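The `diff -u` at 08:34:50.925 drives the "does not require reconfiguration" decision: if the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on the node, the control plane is left alone. A hedged Go sketch of that comparison (paths from the log; how minikube actually treats a missing file may differ from what is shown here):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	current, errCur := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	proposed, errNew := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if errCur != nil || errNew != nil || !bytes.Equal(current, proposed) {
		fmt.Println("configs differ (or one is missing): control plane needs reconfiguration")
		return
	}
	fmt.Println("the running cluster does not require reconfiguration")
}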
	I1217 08:34:50.934464  909575 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.934572  909575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:34:50.935329  909575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.935627  909575 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:34:50.935719  909575 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:34:50.935851  909575 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-441323"
	I1217 08:34:50.935876  909575 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-441323"
	W1217 08:34:50.935885  909575 addons.go:248] addon storage-provisioner should already be in state true
	I1217 08:34:50.935908  909575 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:50.935918  909575 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:50.935919  909575 addons.go:70] Setting dashboard=true in profile "newest-cni-441323"
	I1217 08:34:50.935934  909575 addons.go:239] Setting addon dashboard=true in "newest-cni-441323"
	W1217 08:34:50.935942  909575 addons.go:248] addon dashboard should already be in state true
	I1217 08:34:50.935959  909575 addons.go:70] Setting default-storageclass=true in profile "newest-cni-441323"
	I1217 08:34:50.935968  909575 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:50.935984  909575 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-441323"
	I1217 08:34:50.936276  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.936442  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.936447  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.941762  909575 out.go:179] * Verifying Kubernetes components...
	I1217 08:34:50.943672  909575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:50.963881  909575 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:34:50.965331  909575 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 08:34:50.965414  909575 addons.go:239] Setting addon default-storageclass=true in "newest-cni-441323"
	W1217 08:34:50.965433  909575 addons.go:248] addon default-storageclass should already be in state true
	I1217 08:34:50.965463  909575 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:50.965409  909575 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:34:50.965516  909575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:34:50.965583  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:50.966041  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.968042  909575 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 08:34:50.969375  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 08:34:50.969397  909575 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 08:34:50.969468  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:51.003274  909575 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:34:51.003302  909575 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:34:51.003362  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:51.011611  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:51.017075  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:51.029037  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:51.086155  909575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:51.100079  909575 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:34:51.100155  909575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:34:51.112216  909575 api_server.go:72] duration metric: took 176.543858ms to wait for apiserver process to appear ...
	I1217 08:34:51.112246  909575 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:34:51.112271  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:51.123228  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 08:34:51.123251  909575 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 08:34:51.130424  909575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:34:51.135995  909575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:34:51.137678  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 08:34:51.137705  909575 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 08:34:51.153086  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 08:34:51.153120  909575 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 08:34:51.169179  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 08:34:51.169208  909575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 08:34:51.183329  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 08:34:51.183362  909575 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 08:34:51.197449  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 08:34:51.197486  909575 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 08:34:51.210488  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 08:34:51.210514  909575 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 08:34:51.223287  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 08:34:51.223310  909575 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 08:34:51.236457  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 08:34:51.236484  909575 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 08:34:51.249908  909575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 08:34:52.224625  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 08:34:52.224666  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 08:34:52.224685  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:52.283765  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 08:34:52.283802  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 08:34:52.612812  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:52.617385  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:34:52.617426  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:34:52.848569  909575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.718085858s)
	I1217 08:34:52.848626  909575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.712592405s)
	I1217 08:34:52.848735  909575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.598782878s)
	I1217 08:34:52.850819  909575 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-441323 addons enable metrics-server
	
	I1217 08:34:52.860764  909575 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 08:34:52.862706  909575 addons.go:530] duration metric: took 1.92699964s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:34:53.112926  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:53.117200  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:34:53.117228  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:34:53.612716  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:53.617688  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:34:53.618930  909575 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:34:53.618962  909575 api_server.go:131] duration metric: took 2.506707504s to wait for apiserver health ...
	I1217 08:34:53.618973  909575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:34:53.622879  909575 system_pods.go:59] 8 kube-system pods found
	I1217 08:34:53.622925  909575 system_pods.go:61] "coredns-7d764666f9-mbqs4" [3b7c3c61-8c2e-48ea-92b5-1af40280abb5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 08:34:53.622941  909575 system_pods.go:61] "etcd-newest-cni-441323" [6a0673c4-7e59-496d-90bf-fb6a7588302a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:34:53.622956  909575 system_pods.go:61] "kindnet-5mpr4" [1249690d-960e-4091-9a1a-0eebd4e957c6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:34:53.622967  909575 system_pods.go:61] "kube-apiserver-newest-cni-441323" [476ab60a-16ad-45d2-8fa6-ac1163efeb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:34:53.622983  909575 system_pods.go:61] "kube-controller-manager-newest-cni-441323" [fc64d625-e270-4987-a8d2-0daa3bb0e059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:34:53.622995  909575 system_pods.go:61] "kube-proxy-pp5v6" [92bd18af-4b69-46fc-8dbb-d0fe791260b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:34:53.623003  909575 system_pods.go:61] "kube-scheduler-newest-cni-441323" [32205d98-c144-4d9f-98c4-aba22a024602] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:34:53.623013  909575 system_pods.go:61] "storage-provisioner" [f0eed8b6-90b3-4a5f-8f84-dd9ed3415dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 08:34:53.623022  909575 system_pods.go:74] duration metric: took 4.041638ms to wait for pod list to return data ...
	I1217 08:34:53.623035  909575 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:34:53.625728  909575 default_sa.go:45] found service account: "default"
	I1217 08:34:53.625752  909575 default_sa.go:55] duration metric: took 2.709295ms for default service account to be created ...
	I1217 08:34:53.625766  909575 kubeadm.go:587] duration metric: took 2.690104822s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:34:53.625785  909575 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:34:53.628443  909575 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:34:53.628475  909575 node_conditions.go:123] node cpu capacity is 8
	I1217 08:34:53.628493  909575 node_conditions.go:105] duration metric: took 2.70297ms to run NodePressure ...
	I1217 08:34:53.628510  909575 start.go:242] waiting for startup goroutines ...
	I1217 08:34:53.628524  909575 start.go:247] waiting for cluster config update ...
	I1217 08:34:53.628572  909575 start.go:256] writing updated cluster config ...
	I1217 08:34:53.628931  909575 ssh_runner.go:195] Run: rm -f paused
	I1217 08:34:53.680173  909575 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:34:53.682573  909575 out.go:179] * Done! kubectl is now configured to use "newest-cni-441323" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.54008623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.541005955Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fa16709e-3ede-4428-afe2-ff9823db7144 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.542899744Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.543360922Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=81a51413-9da6-41ce-8c73-fc274f57c88b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.543645251Z" level=info msg="Ran pod sandbox 426a8ee3e6ebe315fd2c64df43dca2244b79f9ed177cae0ee03e83f04c24f926 with infra container: kube-system/kindnet-5mpr4/POD" id=fa16709e-3ede-4428-afe2-ff9823db7144 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.545034359Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=193e90aa-4b84-4902-9588-123c64973490 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.545098302Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.545955354Z" level=info msg="Ran pod sandbox 846b70a399e711487066105d3814548965f4709258b6c4862f9cae9fbcf7f2cc with infra container: kube-system/kube-proxy-pp5v6/POD" id=81a51413-9da6-41ce-8c73-fc274f57c88b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.546030933Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2a63afe3-9d2b-42a7-84d1-bcac8129353c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.547025863Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ba478203-5bb7-4059-ac95-28f58b14a764 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.547260218Z" level=info msg="Creating container: kube-system/kindnet-5mpr4/kindnet-cni" id=fb3d3201-79e7-4040-b50c-7d6548233fa7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.547367496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.548388603Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=88482c8a-a053-4b6e-bc7e-c65f860b5c43 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.552301175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.552915713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.552982566Z" level=info msg="Creating container: kube-system/kube-proxy-pp5v6/kube-proxy" id=f27f9f85-7f85-4ad5-9dc3-9c01f820cb05 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.553108216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.557896823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.558620505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.583375069Z" level=info msg="Created container aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f: kube-system/kindnet-5mpr4/kindnet-cni" id=fb3d3201-79e7-4040-b50c-7d6548233fa7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.584180327Z" level=info msg="Starting container: aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f" id=341f8e3c-abb5-45ab-9b16-6914db08527e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.586397448Z" level=info msg="Started container" PID=1052 containerID=aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f description=kube-system/kindnet-5mpr4/kindnet-cni id=341f8e3c-abb5-45ab-9b16-6914db08527e name=/runtime.v1.RuntimeService/StartContainer sandboxID=426a8ee3e6ebe315fd2c64df43dca2244b79f9ed177cae0ee03e83f04c24f926
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.588014548Z" level=info msg="Created container d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae: kube-system/kube-proxy-pp5v6/kube-proxy" id=f27f9f85-7f85-4ad5-9dc3-9c01f820cb05 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.58945754Z" level=info msg="Starting container: d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae" id=7a27f06d-f5da-4e8b-8c12-b0a9a40d964d name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.59217454Z" level=info msg="Started container" PID=1053 containerID=d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae description=kube-system/kube-proxy-pp5v6/kube-proxy id=7a27f06d-f5da-4e8b-8c12-b0a9a40d964d name=/runtime.v1.RuntimeService/StartContainer sandboxID=846b70a399e711487066105d3814548965f4709258b6c4862f9cae9fbcf7f2cc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d304fcf952831       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   3 seconds ago       Running             kube-proxy                1                   846b70a399e71       kube-proxy-pp5v6                            kube-system
	aaf8dff0b9a3f       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   3 seconds ago       Running             kindnet-cni               1                   426a8ee3e6ebe       kindnet-5mpr4                               kube-system
	7ed1c6caaa601       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   6 seconds ago       Running             etcd                      1                   fff7850f84de1       etcd-newest-cni-441323                      kube-system
	0dfcf1c84ee6b       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   6 seconds ago       Running             kube-controller-manager   1                   6498408ada328       kube-controller-manager-newest-cni-441323   kube-system
	a6408d47d2e42       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   6 seconds ago       Running             kube-apiserver            1                   beb7f18493c11       kube-apiserver-newest-cni-441323            kube-system
	140066cadb701       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   6 seconds ago       Running             kube-scheduler            1                   05e2f8c33163e       kube-scheduler-newest-cni-441323            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-441323
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-441323
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=newest-cni-441323
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_34_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:34:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-441323
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:34:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-441323
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                6a252122-c552-42fb-8ce7-584cc3dce1f6
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-441323                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-5mpr4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      35s
	  kube-system                 kube-apiserver-newest-cni-441323             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-newest-cni-441323    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-pp5v6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-newest-cni-441323             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  36s   node-controller  Node newest-cni-441323 event: Registered Node newest-cni-441323 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-441323 event: Registered Node newest-cni-441323 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f] <==
	{"level":"info","ts":"2025-12-17T08:34:50.828889Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-17T08:34:50.828912Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T08:34:50.828967Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T08:34:50.828966Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T08:34:50.829029Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-17T08:34:50.829043Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-17T08:34:50.829062Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T08:34:51.318552Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:51.318606Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:51.318683Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:51.318721Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:34:51.318740Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.319587Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.319617Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:34:51.319638Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.319648Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.320419Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:34:51.320415Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-441323 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:34:51.320443Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:34:51.320719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:34:51.320754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:34:51.322335Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:34:51.323098Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:34:51.324856Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-17T08:34:51.324857Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:34:57 up  2:17,  0 user,  load average: 2.94, 3.78, 2.85
	Linux newest-cni-441323 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f] <==
	I1217 08:34:53.828157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:34:53.828446       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 08:34:53.828606       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:34:53.828636       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:34:53.828670       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:34:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:34:54.029924       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:34:54.030010       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:34:54.030027       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:34:54.030175       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:34:54.330193       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:34:54.330226       1 metrics.go:72] Registering metrics
	I1217 08:34:54.330289       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90] <==
	I1217 08:34:52.307480       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 08:34:52.307613       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 08:34:52.307445       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.307453       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.308231       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:34:52.308291       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:34:52.308380       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.313240       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 08:34:52.316604       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:34:52.322835       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.322870       1 policy_source.go:248] refreshing policies
	I1217 08:34:52.352503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:34:52.641799       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:34:52.677748       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:34:52.702926       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:34:52.713516       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:34:52.722926       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:34:52.777459       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.90.67"}
	I1217 08:34:52.792266       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.220.232"}
	I1217 08:34:53.210244       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:34:55.938325       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:34:56.038445       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:34:56.038446       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:34:56.089167       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:34:56.140117       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab] <==
	I1217 08:34:55.442685       1 range_allocator.go:177] "Sending events to api server"
	I1217 08:34:55.442732       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 08:34:55.442739       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:55.442737       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 08:34:55.442745       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.442823       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-441323"
	I1217 08:34:55.442890       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 08:34:55.440152       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441197       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.439988       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440139       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441207       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441209       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440019       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440176       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441139       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440052       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440186       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440084       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.449174       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.454524       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:55.540634       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.540659       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:34:55.540665       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:34:55.554987       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae] <==
	I1217 08:34:53.633343       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:34:53.713394       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:53.814573       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:53.814647       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:34:53.814761       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:34:53.834750       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:34:53.834823       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:34:53.840066       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:34:53.840631       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:34:53.840678       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:34:53.842913       1 config.go:200] "Starting service config controller"
	I1217 08:34:53.842940       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:34:53.842933       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:34:53.842959       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:34:53.842972       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:34:53.842979       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:34:53.843119       1 config.go:309] "Starting node config controller"
	I1217 08:34:53.843133       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:34:53.943115       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:34:53.943150       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:34:53.943169       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:34:53.943234       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6] <==
	I1217 08:34:51.110487       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:34:52.236755       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:34:52.236890       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:34:52.236906       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:34:52.236916       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:34:52.290446       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:34:52.290481       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:34:52.293949       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:34:52.294091       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:34:52.295587       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:52.294145       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:34:52.395970       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407159     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-441323\" already exists" pod="kube-system/kube-apiserver-newest-cni-441323"
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407278     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-441323" containerName="kube-apiserver"
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407563     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-441323\" already exists" pod="kube-system/kube-controller-manager-newest-cni-441323"
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407749     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-441323" containerName="kube-controller-manager"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.228895     678 apiserver.go:52] "Watching apiserver"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.235460     678 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247229     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-xtables-lock\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247292     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92bd18af-4b69-46fc-8dbb-d0fe791260b4-lib-modules\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247316     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-lib-modules\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247352     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-cni-cfg\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247390     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92bd18af-4b69-46fc-8dbb-d0fe791260b4-xtables-lock\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.280966     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-441323" containerName="kube-controller-manager"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.281010     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.281209     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-441323" containerName="kube-scheduler"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.281340     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289087     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-441323\" already exists" pod="kube-system/etcd-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289091     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-441323\" already exists" pod="kube-system/kube-apiserver-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289214     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-441323" containerName="etcd"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289291     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-441323" containerName="kube-apiserver"
	Dec 17 08:34:54 newest-cni-441323 kubelet[678]: E1217 08:34:54.286806     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-441323" containerName="kube-apiserver"
	Dec 17 08:34:54 newest-cni-441323 kubelet[678]: E1217 08:34:54.287344     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-441323" containerName="etcd"
	Dec 17 08:34:54 newest-cni-441323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:34:54 newest-cni-441323 kubelet[678]: I1217 08:34:54.668605     678 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 08:34:54 newest-cni-441323 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:34:54 newest-cni-441323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-441323 -n newest-cni-441323
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-441323 -n newest-cni-441323: exit status 2 (336.452928ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-441323 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c: exit status 1 (66.626035ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-mbqs4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-zffcn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-b755c" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-441323
helpers_test.go:244: (dbg) docker inspect newest-cni-441323:

-- stdout --
	[
	    {
	        "Id": "5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4",
	        "Created": "2025-12-17T08:34:03.147489501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 909776,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T08:34:44.210348896Z",
	            "FinishedAt": "2025-12-17T08:34:43.338214669Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/hosts",
	        "LogPath": "/var/lib/docker/containers/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4/5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4-json.log",
	        "Name": "/newest-cni-441323",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-441323:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-441323",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e7ea243e76cba10682589cbff1fe6a5fb685681e60e102da89241ac251a62d4",
	                "LowerDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588-init/diff:/var/lib/docker/overlay2/9a6a29ec47dbaef6c12a8bb0c542fecd8dbef702a8499446c9020819d1011744/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bdbb9a9500b3153ca1a8ad96e8032e2aa4e0af83876a7756d2a334149e68588/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-441323",
	                "Source": "/var/lib/docker/volumes/newest-cni-441323/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-441323",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-441323",
	                "name.minikube.sigs.k8s.io": "newest-cni-441323",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f8d20a5888c12816e66eb1dac41b2fba0fbc976f064086f7cc40989546d9e374",
	            "SandboxKey": "/var/run/docker/netns/f8d20a5888c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33544"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33543"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-441323": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3d2e6bc2ee52a92bd5fc401c42f5bb70038c4ef50a33f6f2359c529ab511ea2",
	                    "EndpointID": "34790b8903961489951cf69a5dfab3d65830004c6beecfe80ce573f9564352f0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ea:1c:cc:2a:cb:c2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-441323",
	                        "5e7ea243e76c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323: exit status 2 (327.353406ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-441323 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-441323 logs -n 25: (1.021700938s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ embed-certs-581631 image list --format=json                                                                                                                                                                                                        │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p embed-certs-581631 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ image   │ old-k8s-version-640910 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p old-k8s-version-640910 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ delete  │ -p old-k8s-version-640910                                                                                                                                                                                                                          │ old-k8s-version-640910       │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p embed-certs-581631                                                                                                                                                                                                                              │ embed-certs-581631           │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ image   │ no-preload-936988 image list --format=json                                                                                                                                                                                                         │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │ 17 Dec 25 08:33 UTC │
	│ pause   │ -p no-preload-936988 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:33 UTC │                     │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ delete  │ -p no-preload-936988                                                                                                                                                                                                                               │ no-preload-936988            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ default-k8s-diff-port-225657 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ pause   │ -p default-k8s-diff-port-225657 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-225657                                                                                                                                                                                                                    │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-441323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-225657                                                                                                                                                                                                                    │ default-k8s-diff-port-225657 │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ stop    │ -p newest-cni-441323 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-441323 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ start   │ -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ image   │ newest-cni-441323 image list --format=json                                                                                                                                                                                                         │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │ 17 Dec 25 08:34 UTC │
	│ pause   │ -p newest-cni-441323 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-441323            │ jenkins │ v1.37.0 │ 17 Dec 25 08:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:34:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:34:43.974125  909575 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:34:43.974226  909575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:43.974231  909575 out.go:374] Setting ErrFile to fd 2...
	I1217 08:34:43.974234  909575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:34:43.974473  909575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:34:43.974950  909575 out.go:368] Setting JSON to false
	I1217 08:34:43.975994  909575 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8229,"bootTime":1765952255,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:34:43.976061  909575 start.go:143] virtualization: kvm guest
	I1217 08:34:43.978761  909575 out.go:179] * [newest-cni-441323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:34:43.980658  909575 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:34:43.980717  909575 notify.go:221] Checking for updates...
	I1217 08:34:43.983768  909575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:34:43.985327  909575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:34:43.986980  909575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:34:43.988840  909575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:34:43.990672  909575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:34:43.992623  909575 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:43.993208  909575 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:34:44.017420  909575 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:34:44.017527  909575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:34:44.074200  909575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-17 08:34:44.063237382 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:34:44.074355  909575 docker.go:319] overlay module found
	I1217 08:34:44.076416  909575 out.go:179] * Using the docker driver based on existing profile
	I1217 08:34:44.077650  909575 start.go:309] selected driver: docker
	I1217 08:34:44.077668  909575 start.go:927] validating driver "docker" against &{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:44.077768  909575 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:34:44.078355  909575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:34:44.132994  909575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-17 08:34:44.123243604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:34:44.133299  909575 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:34:44.133326  909575 cni.go:84] Creating CNI manager for ""
	I1217 08:34:44.133395  909575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:44.133428  909575 start.go:353] cluster config:
	{Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:44.135596  909575 out.go:179] * Starting "newest-cni-441323" primary control-plane node in "newest-cni-441323" cluster
	I1217 08:34:44.136911  909575 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 08:34:44.138501  909575 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 08:34:44.139879  909575 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:34:44.139929  909575 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 08:34:44.139951  909575 cache.go:65] Caching tarball of preloaded images
	I1217 08:34:44.140000  909575 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 08:34:44.140075  909575 preload.go:238] Found /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:34:44.140091  909575 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 08:34:44.140221  909575 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:34:44.162024  909575 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 08:34:44.162047  909575 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 08:34:44.162064  909575 cache.go:243] Successfully downloaded all kic artifacts
	I1217 08:34:44.162096  909575 start.go:360] acquireMachinesLock for newest-cni-441323: {Name:mk9498dbb1eb77dbf697c7e17cff718c09574836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:34:44.162152  909575 start.go:364] duration metric: took 38.334µs to acquireMachinesLock for "newest-cni-441323"
	I1217 08:34:44.162176  909575 start.go:96] Skipping create...Using existing machine configuration
	I1217 08:34:44.162184  909575 fix.go:54] fixHost starting: 
	I1217 08:34:44.162385  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:44.180334  909575 fix.go:112] recreateIfNeeded on newest-cni-441323: state=Stopped err=<nil>
	W1217 08:34:44.180368  909575 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 08:34:44.182369  909575 out.go:252] * Restarting existing docker container for "newest-cni-441323" ...
	I1217 08:34:44.182436  909575 cli_runner.go:164] Run: docker start newest-cni-441323
	I1217 08:34:44.436751  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:44.456512  909575 kic.go:432] container "newest-cni-441323" state is running.
	I1217 08:34:44.456915  909575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:44.476616  909575 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/config.json ...
	I1217 08:34:44.476926  909575 machine.go:94] provisionDockerMachine start ...
	I1217 08:34:44.477031  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:44.496801  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:44.496940  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:44.496955  909575 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:34:44.497749  909575 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59626->127.0.0.1:33540: read: connection reset by peer
	I1217 08:34:47.626911  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:47.626947  909575 ubuntu.go:182] provisioning hostname "newest-cni-441323"
	I1217 08:34:47.627029  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:47.646911  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:47.647030  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:47.647044  909575 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-441323 && echo "newest-cni-441323" | sudo tee /etc/hostname
	I1217 08:34:47.785762  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-441323
	
	I1217 08:34:47.785855  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:47.805254  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:47.805353  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:47.805387  909575 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-441323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-441323/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-441323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:34:47.932897  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:34:47.932932  909575 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-552461/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-552461/.minikube}
	I1217 08:34:47.932971  909575 ubuntu.go:190] setting up certificates
	I1217 08:34:47.932994  909575 provision.go:84] configureAuth start
	I1217 08:34:47.933069  909575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:47.951754  909575 provision.go:143] copyHostCerts
	I1217 08:34:47.951850  909575 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem, removing ...
	I1217 08:34:47.951873  909575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem
	I1217 08:34:47.951962  909575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/ca.pem (1082 bytes)
	I1217 08:34:47.952126  909575 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem, removing ...
	I1217 08:34:47.952140  909575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem
	I1217 08:34:47.952185  909575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/cert.pem (1123 bytes)
	I1217 08:34:47.952287  909575 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem, removing ...
	I1217 08:34:47.952301  909575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem
	I1217 08:34:47.952348  909575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-552461/.minikube/key.pem (1675 bytes)
	I1217 08:34:47.952444  909575 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem org=jenkins.newest-cni-441323 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-441323]
	I1217 08:34:48.000138  909575 provision.go:177] copyRemoteCerts
	I1217 08:34:48.000216  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:34:48.000295  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.019140  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.115224  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 08:34:48.133979  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 08:34:48.151866  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 08:34:48.170810  909575 provision.go:87] duration metric: took 237.785329ms to configureAuth
	I1217 08:34:48.170839  909575 ubuntu.go:206] setting minikube options for container-runtime
	I1217 08:34:48.171016  909575 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:48.171115  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.190217  909575 main.go:143] libmachine: Using SSH client type: native
	I1217 08:34:48.190354  909575 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1217 08:34:48.190377  909575 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:34:48.475905  909575 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:34:48.475936  909575 machine.go:97] duration metric: took 3.998991797s to provisionDockerMachine
	I1217 08:34:48.475948  909575 start.go:293] postStartSetup for "newest-cni-441323" (driver="docker")
	I1217 08:34:48.475961  909575 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:34:48.476032  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:34:48.476079  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.495988  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.590777  909575 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:34:48.594819  909575 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 08:34:48.594851  909575 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 08:34:48.594862  909575 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/addons for local assets ...
	I1217 08:34:48.594918  909575 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-552461/.minikube/files for local assets ...
	I1217 08:34:48.594989  909575 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem -> 5560552.pem in /etc/ssl/certs
	I1217 08:34:48.595086  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 08:34:48.603394  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:48.621570  909575 start.go:296] duration metric: took 145.603503ms for postStartSetup
	I1217 08:34:48.621684  909575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:34:48.621752  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.640688  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.731229  909575 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 08:34:48.736624  909575 fix.go:56] duration metric: took 4.574429564s for fixHost
	I1217 08:34:48.737642  909575 start.go:83] releasing machines lock for "newest-cni-441323", held for 4.5754524s
	I1217 08:34:48.738010  909575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-441323
	I1217 08:34:48.757035  909575 ssh_runner.go:195] Run: cat /version.json
	I1217 08:34:48.757091  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.757150  909575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:34:48.757235  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:48.777351  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.777709  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:48.921987  909575 ssh_runner.go:195] Run: systemctl --version
	I1217 08:34:48.928932  909575 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:34:48.966185  909575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:34:48.971360  909575 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:34:48.971433  909575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:34:48.981112  909575 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 08:34:48.981143  909575 start.go:496] detecting cgroup driver to use...
	I1217 08:34:48.981180  909575 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 08:34:48.981241  909575 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:34:48.996835  909575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:34:49.010627  909575 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:34:49.010699  909575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:34:49.026182  909575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:34:49.039313  909575 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:34:49.120242  909575 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:34:49.201888  909575 docker.go:234] disabling docker service ...
	I1217 08:34:49.201964  909575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:34:49.216921  909575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:34:49.230195  909575 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:34:49.314315  909575 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:34:49.392815  909575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:34:49.406135  909575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:34:49.421952  909575 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:34:49.422011  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.431936  909575 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 08:34:49.431998  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.441904  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.451499  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.461251  909575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:34:49.470255  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.480021  909575 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.489685  909575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:34:49.499548  909575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:34:49.508072  909575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 08:34:49.516047  909575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:49.596394  909575 ssh_runner.go:195] Run: sudo systemctl restart crio
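The sed calls above patch /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and switching the cgroup manager to systemd before CRI-O is restarted. A minimal local sketch of that edit in Go, using only the file path and key names visible in the log (the helper itself is illustrative, not minikube code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf mirrors the logged sed edits: force pause_image and
// cgroup_manager to the desired values in a CRI-O drop-in config file.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

As in the log, the change only takes effect after systemctl daemon-reload and systemctl restart crio.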
	I1217 08:34:49.737657  909575 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:34:49.737749  909575 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:34:49.742160  909575 start.go:564] Will wait 60s for crictl version
	I1217 08:34:49.742226  909575 ssh_runner.go:195] Run: which crictl
	I1217 08:34:49.746104  909575 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 08:34:49.773351  909575 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
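The two 60-second waits above cover the crio.sock path and the crictl version probe. A stand-alone equivalent of the socket wait, with the path and timeout taken from the log and the poll interval assumed:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses, roughly
// what the "Will wait 60s for socket path" step is doing.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}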
	I1217 08:34:49.773433  909575 ssh_runner.go:195] Run: crio --version
	I1217 08:34:49.803051  909575 ssh_runner.go:195] Run: crio --version
	I1217 08:34:49.834977  909575 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 08:34:49.836555  909575 cli_runner.go:164] Run: docker network inspect newest-cni-441323 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 08:34:49.856401  909575 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 08:34:49.860870  909575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
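The one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the current gateway address. The same idea as a small Go sketch (path, IP and host name come from the log; the helper is illustrative and skips the sudo/tmp-file dance):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops existing lines for host and appends "ip<TAB>host",
// mirroring the grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The same pattern repeats later for control-plane.minikube.internal.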
	I1217 08:34:49.873288  909575 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 08:34:49.874699  909575 kubeadm.go:884] updating cluster {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:34:49.874862  909575 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 08:34:49.874925  909575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:49.909258  909575 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:49.909281  909575 crio.go:433] Images already preloaded, skipping extraction
	I1217 08:34:49.909343  909575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:34:49.936302  909575 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:34:49.936328  909575 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:34:49.936336  909575 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 08:34:49.936449  909575 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-441323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:34:49.936526  909575 ssh_runner.go:195] Run: crio config
	I1217 08:34:49.983906  909575 cni.go:84] Creating CNI manager for ""
	I1217 08:34:49.983930  909575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 08:34:49.983950  909575 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 08:34:49.983977  909575 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-441323 NodeName:newest-cni-441323 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:34:49.984118  909575 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-441323"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 08:34:49.984186  909575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 08:34:49.992902  909575 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:34:49.992973  909575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:34:50.001410  909575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 08:34:50.015127  909575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 08:34:50.028399  909575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
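The kubeadm.yaml shipped above has to agree with the cgroup driver detected earlier ("systemd" on this host); the KubeletConfiguration document sets cgroupDriver accordingly. A small consistency check of that field, assuming the document has been saved locally as kubelet-config.yaml and that gopkg.in/yaml.v3 is available (both are assumptions of this sketch, not part of the test):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletCfg captures only the field we care about from the generated
// KubeletConfiguration document.
type kubeletCfg struct {
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	data, err := os.ReadFile("kubelet-config.yaml") // hypothetical local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg kubeletCfg
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if cfg.CgroupDriver != "systemd" {
		fmt.Fprintf(os.Stderr, "expected systemd cgroup driver, got %q\n", cfg.CgroupDriver)
		os.Exit(1)
	}
	fmt.Println("kubelet cgroupDriver matches the detected host driver")
}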
	I1217 08:34:50.042078  909575 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 08:34:50.046352  909575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:34:50.057613  909575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:50.137246  909575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:50.162291  909575 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323 for IP: 192.168.76.2
	I1217 08:34:50.162322  909575 certs.go:195] generating shared ca certs ...
	I1217 08:34:50.162344  909575 certs.go:227] acquiring lock for ca certs: {Name:mk995ee501e1f5869ce98a67d683c355f543842f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.162552  909575 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key
	I1217 08:34:50.162611  909575 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key
	I1217 08:34:50.162622  909575 certs.go:257] generating profile certs ...
	I1217 08:34:50.162705  909575 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/client.key
	I1217 08:34:50.162778  909575 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key.20418f41
	I1217 08:34:50.162814  909575 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key
	I1217 08:34:50.162915  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem (1338 bytes)
	W1217 08:34:50.162963  909575 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055_empty.pem, impossibly tiny 0 bytes
	I1217 08:34:50.162976  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 08:34:50.163005  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/ca.pem (1082 bytes)
	I1217 08:34:50.163029  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:34:50.163053  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/certs/key.pem (1675 bytes)
	I1217 08:34:50.163094  909575 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem (1708 bytes)
	I1217 08:34:50.163714  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:34:50.183114  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:34:50.203501  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:34:50.223916  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:34:50.247994  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 08:34:50.268287  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 08:34:50.286992  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:34:50.305590  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/newest-cni-441323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:34:50.323843  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/certs/556055.pem --> /usr/share/ca-certificates/556055.pem (1338 bytes)
	I1217 08:34:50.341927  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/ssl/certs/5560552.pem --> /usr/share/ca-certificates/5560552.pem (1708 bytes)
	I1217 08:34:50.361686  909575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:34:50.380653  909575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:34:50.393954  909575 ssh_runner.go:195] Run: openssl version
	I1217 08:34:50.400557  909575 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.409202  909575 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5560552.pem /etc/ssl/certs/5560552.pem
	I1217 08:34:50.417927  909575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.422196  909575 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 07:58 /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.422258  909575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5560552.pem
	I1217 08:34:50.457282  909575 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 08:34:50.465771  909575 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.473818  909575 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:34:50.481895  909575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.486087  909575 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 07:50 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.486173  909575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:34:50.520666  909575 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:34:50.528829  909575 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.537218  909575 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/556055.pem /etc/ssl/certs/556055.pem
	I1217 08:34:50.546003  909575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.550242  909575 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 07:58 /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.550301  909575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/556055.pem
	I1217 08:34:50.585456  909575 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
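Each CA above is linked into /usr/share/ca-certificates and then exposed as /etc/ssl/certs/<subject-hash>.0, which is the lookup name OpenSSL-based clients use. A sketch of that install step that shells out to the same openssl x509 -hash call seen in the log (the wrapper function and its error handling are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath under /etc/ssl/certs/<subject-hash>.0, the same
// layout the log builds with ln -fs and openssl x509 -hash -noout.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs does
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}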
	I1217 08:34:50.594268  909575 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:34:50.599062  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 08:34:50.633651  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 08:34:50.669122  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 08:34:50.717007  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 08:34:50.759373  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 08:34:50.813476  909575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
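The six openssl ... -checkend 86400 runs above simply assert that none of the control-plane certificates expires within the next 24 hours. The same check using only Go's standard library, for any one PEM file (the path below is just one of the files from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires before now+window, which is what `openssl x509 -checkend` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it needs regenerating")
	}
}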
	I1217 08:34:50.860731  909575 kubeadm.go:401] StartCluster: {Name:newest-cni-441323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-441323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:34:50.860846  909575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:34:50.860912  909575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:34:50.893785  909575 cri.go:89] found id: "7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f"
	I1217 08:34:50.893812  909575 cri.go:89] found id: "0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab"
	I1217 08:34:50.893818  909575 cri.go:89] found id: "a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90"
	I1217 08:34:50.893842  909575 cri.go:89] found id: "140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6"
	I1217 08:34:50.893847  909575 cri.go:89] found id: ""
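cri.go collects the kube-system container IDs with one label-filtered crictl query, the exact command in the Run line above. A thin wrapper around the same query (the wrapper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers, running or
// exited, whose pod is in the kube-system namespace, via crictl.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}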
	I1217 08:34:50.893893  909575 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 08:34:50.906570  909575 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T08:34:50Z" level=error msg="open /run/runc: no such file or directory"
	I1217 08:34:50.906654  909575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:34:50.915626  909575 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 08:34:50.915682  909575 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 08:34:50.915730  909575 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 08:34:50.923627  909575 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:34:50.924113  909575 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-441323" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:34:50.924242  909575 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-552461/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-441323" cluster setting kubeconfig missing "newest-cni-441323" context setting]
	I1217 08:34:50.924543  909575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.925930  909575 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 08:34:50.934379  909575 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 08:34:50.934425  909575 kubeadm.go:602] duration metric: took 18.736248ms to restartPrimaryControlPlane
	I1217 08:34:50.934437  909575 kubeadm.go:403] duration metric: took 73.718996ms to StartCluster
	I1217 08:34:50.934464  909575 settings.go:142] acquiring lock: {Name:mk429c17321b6ae595fd8cb447c6366f97046f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.934572  909575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:34:50.935329  909575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/kubeconfig: {Name:mka984ee4692e4a55dd22d5fe5848ec75c38cc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:34:50.935627  909575 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:34:50.935719  909575 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 08:34:50.935851  909575 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-441323"
	I1217 08:34:50.935876  909575 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-441323"
	W1217 08:34:50.935885  909575 addons.go:248] addon storage-provisioner should already be in state true
	I1217 08:34:50.935908  909575 config.go:182] Loaded profile config "newest-cni-441323": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:34:50.935918  909575 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:50.935919  909575 addons.go:70] Setting dashboard=true in profile "newest-cni-441323"
	I1217 08:34:50.935934  909575 addons.go:239] Setting addon dashboard=true in "newest-cni-441323"
	W1217 08:34:50.935942  909575 addons.go:248] addon dashboard should already be in state true
	I1217 08:34:50.935959  909575 addons.go:70] Setting default-storageclass=true in profile "newest-cni-441323"
	I1217 08:34:50.935968  909575 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:50.935984  909575 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-441323"
	I1217 08:34:50.936276  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.936442  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.936447  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.941762  909575 out.go:179] * Verifying Kubernetes components...
	I1217 08:34:50.943672  909575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:34:50.963881  909575 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:34:50.965331  909575 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 08:34:50.965414  909575 addons.go:239] Setting addon default-storageclass=true in "newest-cni-441323"
	W1217 08:34:50.965433  909575 addons.go:248] addon default-storageclass should already be in state true
	I1217 08:34:50.965463  909575 host.go:66] Checking if "newest-cni-441323" exists ...
	I1217 08:34:50.965409  909575 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:34:50.965516  909575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:34:50.965583  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:50.966041  909575 cli_runner.go:164] Run: docker container inspect newest-cni-441323 --format={{.State.Status}}
	I1217 08:34:50.968042  909575 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 08:34:50.969375  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 08:34:50.969397  909575 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 08:34:50.969468  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:51.003274  909575 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:34:51.003302  909575 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:34:51.003362  909575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-441323
	I1217 08:34:51.011611  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:51.017075  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:51.029037  909575 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/newest-cni-441323/id_ed25519 Username:docker}
	I1217 08:34:51.086155  909575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:34:51.100079  909575 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:34:51.100155  909575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:34:51.112216  909575 api_server.go:72] duration metric: took 176.543858ms to wait for apiserver process to appear ...
	I1217 08:34:51.112246  909575 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:34:51.112271  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:51.123228  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 08:34:51.123251  909575 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 08:34:51.130424  909575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:34:51.135995  909575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:34:51.137678  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 08:34:51.137705  909575 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 08:34:51.153086  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 08:34:51.153120  909575 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 08:34:51.169179  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 08:34:51.169208  909575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 08:34:51.183329  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 08:34:51.183362  909575 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 08:34:51.197449  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 08:34:51.197486  909575 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 08:34:51.210488  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 08:34:51.210514  909575 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 08:34:51.223287  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 08:34:51.223310  909575 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 08:34:51.236457  909575 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 08:34:51.236484  909575 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 08:34:51.249908  909575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
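Addon manifests are first scp'd into /etc/kubernetes/addons and then applied in a single kubectl invocation against the node-local kubeconfig, as the Run line above shows. A stripped-down version of that apply step (manifest list shortened; this is a sketch of the shell-out, not the addons code itself):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Apply two of the dashboard manifests listed in the log, using the
	// node-local kubeconfig exactly as the logged command does.
	cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply",
		"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
		"-f", "/etc/kubernetes/addons/dashboard-dp.yaml",
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		os.Exit(1)
	}
}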
	I1217 08:34:52.224625  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 08:34:52.224666  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 08:34:52.224685  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:52.283765  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 08:34:52.283802  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 08:34:52.612812  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:52.617385  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:34:52.617426  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:34:52.848569  909575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.718085858s)
	I1217 08:34:52.848626  909575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.712592405s)
	I1217 08:34:52.848735  909575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.598782878s)
	I1217 08:34:52.850819  909575 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-441323 addons enable metrics-server
	
	I1217 08:34:52.860764  909575 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 08:34:52.862706  909575 addons.go:530] duration metric: took 1.92699964s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 08:34:53.112926  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:53.117200  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 08:34:53.117228  909575 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 08:34:53.612716  909575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 08:34:53.617688  909575 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 08:34:53.618930  909575 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 08:34:53.618962  909575 api_server.go:131] duration metric: took 2.506707504s to wait for apiserver health ...
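The retries above show /healthz moving from 403 (anonymous access rejected) through 500 (the rbac and scheduling post-start hooks still pending) to 200 in about 2.5 seconds. A bare-bones poller with the same shape; certificate verification is skipped here purely to keep the sketch self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, printing non-200 bodies much like the test log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver is healthy")
}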
	I1217 08:34:53.618973  909575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:34:53.622879  909575 system_pods.go:59] 8 kube-system pods found
	I1217 08:34:53.622925  909575 system_pods.go:61] "coredns-7d764666f9-mbqs4" [3b7c3c61-8c2e-48ea-92b5-1af40280abb5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 08:34:53.622941  909575 system_pods.go:61] "etcd-newest-cni-441323" [6a0673c4-7e59-496d-90bf-fb6a7588302a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 08:34:53.622956  909575 system_pods.go:61] "kindnet-5mpr4" [1249690d-960e-4091-9a1a-0eebd4e957c6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 08:34:53.622967  909575 system_pods.go:61] "kube-apiserver-newest-cni-441323" [476ab60a-16ad-45d2-8fa6-ac1163efeb38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 08:34:53.622983  909575 system_pods.go:61] "kube-controller-manager-newest-cni-441323" [fc64d625-e270-4987-a8d2-0daa3bb0e059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 08:34:53.622995  909575 system_pods.go:61] "kube-proxy-pp5v6" [92bd18af-4b69-46fc-8dbb-d0fe791260b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 08:34:53.623003  909575 system_pods.go:61] "kube-scheduler-newest-cni-441323" [32205d98-c144-4d9f-98c4-aba22a024602] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 08:34:53.623013  909575 system_pods.go:61] "storage-provisioner" [f0eed8b6-90b3-4a5f-8f84-dd9ed3415dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 08:34:53.623022  909575 system_pods.go:74] duration metric: took 4.041638ms to wait for pod list to return data ...
	I1217 08:34:53.623035  909575 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:34:53.625728  909575 default_sa.go:45] found service account: "default"
	I1217 08:34:53.625752  909575 default_sa.go:55] duration metric: took 2.709295ms for default service account to be created ...
	I1217 08:34:53.625766  909575 kubeadm.go:587] duration metric: took 2.690104822s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 08:34:53.625785  909575 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:34:53.628443  909575 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 08:34:53.628475  909575 node_conditions.go:123] node cpu capacity is 8
	I1217 08:34:53.628493  909575 node_conditions.go:105] duration metric: took 2.70297ms to run NodePressure ...
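The NodePressure step reads back the node's reported capacity, 8 CPUs and 304681132Ki of ephemeral storage here. Reading the same fields with client-go, assuming a kubeconfig for the profile is reachable at the default path (the dependency and the path are assumptions of this sketch):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]
		storage := n.Status.Capacity["ephemeral-storage"]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}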
	I1217 08:34:53.628510  909575 start.go:242] waiting for startup goroutines ...
	I1217 08:34:53.628524  909575 start.go:247] waiting for cluster config update ...
	I1217 08:34:53.628572  909575 start.go:256] writing updated cluster config ...
	I1217 08:34:53.628931  909575 ssh_runner.go:195] Run: rm -f paused
	I1217 08:34:53.680173  909575 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-rc.1 (minor skew: 1)
	I1217 08:34:53.682573  909575 out.go:179] * Done! kubectl is now configured to use "newest-cni-441323" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.54008623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.541005955Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fa16709e-3ede-4428-afe2-ff9823db7144 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.542899744Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.543360922Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=81a51413-9da6-41ce-8c73-fc274f57c88b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.543645251Z" level=info msg="Ran pod sandbox 426a8ee3e6ebe315fd2c64df43dca2244b79f9ed177cae0ee03e83f04c24f926 with infra container: kube-system/kindnet-5mpr4/POD" id=fa16709e-3ede-4428-afe2-ff9823db7144 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.545034359Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=193e90aa-4b84-4902-9588-123c64973490 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.545098302Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.545955354Z" level=info msg="Ran pod sandbox 846b70a399e711487066105d3814548965f4709258b6c4862f9cae9fbcf7f2cc with infra container: kube-system/kube-proxy-pp5v6/POD" id=81a51413-9da6-41ce-8c73-fc274f57c88b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.546030933Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2a63afe3-9d2b-42a7-84d1-bcac8129353c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.547025863Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ba478203-5bb7-4059-ac95-28f58b14a764 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.547260218Z" level=info msg="Creating container: kube-system/kindnet-5mpr4/kindnet-cni" id=fb3d3201-79e7-4040-b50c-7d6548233fa7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.547367496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.548388603Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=88482c8a-a053-4b6e-bc7e-c65f860b5c43 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.552301175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.552915713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.552982566Z" level=info msg="Creating container: kube-system/kube-proxy-pp5v6/kube-proxy" id=f27f9f85-7f85-4ad5-9dc3-9c01f820cb05 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.553108216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.557896823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.558620505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.583375069Z" level=info msg="Created container aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f: kube-system/kindnet-5mpr4/kindnet-cni" id=fb3d3201-79e7-4040-b50c-7d6548233fa7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.584180327Z" level=info msg="Starting container: aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f" id=341f8e3c-abb5-45ab-9b16-6914db08527e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.586397448Z" level=info msg="Started container" PID=1052 containerID=aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f description=kube-system/kindnet-5mpr4/kindnet-cni id=341f8e3c-abb5-45ab-9b16-6914db08527e name=/runtime.v1.RuntimeService/StartContainer sandboxID=426a8ee3e6ebe315fd2c64df43dca2244b79f9ed177cae0ee03e83f04c24f926
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.588014548Z" level=info msg="Created container d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae: kube-system/kube-proxy-pp5v6/kube-proxy" id=f27f9f85-7f85-4ad5-9dc3-9c01f820cb05 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.58945754Z" level=info msg="Starting container: d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae" id=7a27f06d-f5da-4e8b-8c12-b0a9a40d964d name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 08:34:53 newest-cni-441323 crio[522]: time="2025-12-17T08:34:53.59217454Z" level=info msg="Started container" PID=1053 containerID=d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae description=kube-system/kube-proxy-pp5v6/kube-proxy id=7a27f06d-f5da-4e8b-8c12-b0a9a40d964d name=/runtime.v1.RuntimeService/StartContainer sandboxID=846b70a399e711487066105d3814548965f4709258b6c4862f9cae9fbcf7f2cc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d304fcf952831       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   5 seconds ago       Running             kube-proxy                1                   846b70a399e71       kube-proxy-pp5v6                            kube-system
	aaf8dff0b9a3f       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   426a8ee3e6ebe       kindnet-5mpr4                               kube-system
	7ed1c6caaa601       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   fff7850f84de1       etcd-newest-cni-441323                      kube-system
	0dfcf1c84ee6b       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   8 seconds ago       Running             kube-controller-manager   1                   6498408ada328       kube-controller-manager-newest-cni-441323   kube-system
	a6408d47d2e42       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   8 seconds ago       Running             kube-apiserver            1                   beb7f18493c11       kube-apiserver-newest-cni-441323            kube-system
	140066cadb701       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   8 seconds ago       Running             kube-scheduler            1                   05e2f8c33163e       kube-scheduler-newest-cni-441323            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-441323
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-441323
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=newest-cni-441323
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_34_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:34:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-441323
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:34:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 08:34:52 +0000   Wed, 17 Dec 2025 08:34:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-441323
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                6a252122-c552-42fb-8ce7-584cc3dce1f6
	  Boot ID:                    844ed6b2-ad52-419a-a4f4-f8862c237177
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-441323                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-5mpr4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-newest-cni-441323             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-newest-cni-441323    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-pp5v6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-newest-cni-441323             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  38s   node-controller  Node newest-cni-441323 event: Registered Node newest-cni-441323 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-441323 event: Registered Node newest-cni-441323 in Controller
	
	
	==> dmesg <==
	[  +0.000029] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 07:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 2a 9c 3f d7 9b b2 83 f0 0e 19 fe 08 00
	[Dec17 08:30] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[ +25.000295] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.667639] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 1c 1d b9 7b 2c 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 65 84 da 3b 08 06
	[Dec17 08:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff d6 5c 43 51 9f 38 08 06
	[  +0.000867] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 00 2f b1 cb e5 08 06
	[  +0.307764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 a5 7c b7 9e 81 08 06
	[  +0.000443] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 42 7a ef f7 10 08 06
	[  +9.956031] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	[ +20.051630] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa f6 e0 dd 6a 66 08 06
	[  +0.001873] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b3 08 1e 93 cc 08 06
	
	
	==> etcd [7ed1c6caaa6011a6675bd69cbe62ead112bdce2566fe74d2da6abf7436c8d59f] <==
	{"level":"info","ts":"2025-12-17T08:34:50.828889Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-17T08:34:50.828912Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T08:34:50.828967Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T08:34:50.828966Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T08:34:50.829029Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-17T08:34:50.829043Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-17T08:34:50.829062Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T08:34:51.318552Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:51.318606Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:51.318683Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-17T08:34:51.318721Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:34:51.318740Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.319587Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.319617Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:34:51.319638Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.319648Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-17T08:34:51.320419Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:34:51.320415Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-441323 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:34:51.320443Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:34:51.320719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:34:51.320754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:34:51.322335Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:34:51.323098Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:34:51.324856Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-17T08:34:51.324857Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:34:59 up  2:17,  0 user,  load average: 2.87, 3.75, 2.84
	Linux newest-cni-441323 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aaf8dff0b9a3f182279aa193253d4ddeaae877d9968cd5ceff6157379ec7106f] <==
	I1217 08:34:53.828157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 08:34:53.828446       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 08:34:53.828606       1 main.go:148] setting mtu 1500 for CNI 
	I1217 08:34:53.828636       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 08:34:53.828670       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T08:34:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 08:34:54.029924       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 08:34:54.030010       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 08:34:54.030027       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 08:34:54.030175       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 08:34:54.330193       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 08:34:54.330226       1 metrics.go:72] Registering metrics
	I1217 08:34:54.330289       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a6408d47d2e421bffc0051deaf838418f66ed876c98037484db118e1436e4d90] <==
	I1217 08:34:52.307480       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 08:34:52.307613       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 08:34:52.307445       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.307453       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.308231       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 08:34:52.308291       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:34:52.308380       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.313240       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 08:34:52.316604       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:34:52.322835       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:52.322870       1 policy_source.go:248] refreshing policies
	I1217 08:34:52.352503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:34:52.641799       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:34:52.677748       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:34:52.702926       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:34:52.713516       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:34:52.722926       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:34:52.777459       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.90.67"}
	I1217 08:34:52.792266       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.220.232"}
	I1217 08:34:53.210244       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:34:55.938325       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:34:56.038445       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:34:56.038446       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:34:56.089167       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:34:56.140117       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0dfcf1c84ee6b4ff17cb72331eb27697671d6674fcbae816b545365c03020bab] <==
	I1217 08:34:55.442685       1 range_allocator.go:177] "Sending events to api server"
	I1217 08:34:55.442732       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 08:34:55.442739       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:55.442737       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 08:34:55.442745       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.442823       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-441323"
	I1217 08:34:55.442890       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 08:34:55.440152       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441197       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.439988       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440139       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441207       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441209       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440019       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440176       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.441139       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440052       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440186       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.440084       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.449174       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.454524       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:55.540634       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:55.540659       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:34:55.540665       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:34:55.554987       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [d304fcf95283168adc1e473d8b1c910ff86a6fba798ef23493db36296290f1ae] <==
	I1217 08:34:53.633343       1 server_linux.go:53] "Using iptables proxy"
	I1217 08:34:53.713394       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:53.814573       1 shared_informer.go:377] "Caches are synced"
	I1217 08:34:53.814647       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 08:34:53.814761       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:34:53.834750       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 08:34:53.834823       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:34:53.840066       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:34:53.840631       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:34:53.840678       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:34:53.842913       1 config.go:200] "Starting service config controller"
	I1217 08:34:53.842940       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:34:53.842933       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:34:53.842959       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:34:53.842972       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:34:53.842979       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:34:53.843119       1 config.go:309] "Starting node config controller"
	I1217 08:34:53.843133       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:34:53.943115       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:34:53.943150       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:34:53.943169       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:34:53.943234       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [140066cadb701c9beca42376dbfaa93583dcfb70edbc22346ef728a1d76c46c6] <==
	I1217 08:34:51.110487       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:34:52.236755       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:34:52.236890       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:34:52.236906       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:34:52.236916       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:34:52.290446       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:34:52.290481       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:34:52.293949       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:34:52.294091       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:34:52.295587       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:34:52.294145       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:34:52.395970       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407159     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-441323\" already exists" pod="kube-system/kube-apiserver-newest-cni-441323"
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407278     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-441323" containerName="kube-apiserver"
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407563     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-441323\" already exists" pod="kube-system/kube-controller-manager-newest-cni-441323"
	Dec 17 08:34:52 newest-cni-441323 kubelet[678]: E1217 08:34:52.407749     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-441323" containerName="kube-controller-manager"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.228895     678 apiserver.go:52] "Watching apiserver"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.235460     678 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247229     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-xtables-lock\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247292     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92bd18af-4b69-46fc-8dbb-d0fe791260b4-lib-modules\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247316     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-lib-modules\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247352     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1249690d-960e-4091-9a1a-0eebd4e957c6-cni-cfg\") pod \"kindnet-5mpr4\" (UID: \"1249690d-960e-4091-9a1a-0eebd4e957c6\") " pod="kube-system/kindnet-5mpr4"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.247390     678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92bd18af-4b69-46fc-8dbb-d0fe791260b4-xtables-lock\") pod \"kube-proxy-pp5v6\" (UID: \"92bd18af-4b69-46fc-8dbb-d0fe791260b4\") " pod="kube-system/kube-proxy-pp5v6"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.280966     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-441323" containerName="kube-controller-manager"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.281010     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.281209     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-441323" containerName="kube-scheduler"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: I1217 08:34:53.281340     678 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289087     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-441323\" already exists" pod="kube-system/etcd-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289091     678 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-441323\" already exists" pod="kube-system/kube-apiserver-newest-cni-441323"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289214     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-441323" containerName="etcd"
	Dec 17 08:34:53 newest-cni-441323 kubelet[678]: E1217 08:34:53.289291     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-441323" containerName="kube-apiserver"
	Dec 17 08:34:54 newest-cni-441323 kubelet[678]: E1217 08:34:54.286806     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-441323" containerName="kube-apiserver"
	Dec 17 08:34:54 newest-cni-441323 kubelet[678]: E1217 08:34:54.287344     678 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-441323" containerName="etcd"
	Dec 17 08:34:54 newest-cni-441323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 08:34:54 newest-cni-441323 kubelet[678]: I1217 08:34:54.668605     678 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 08:34:54 newest-cni-441323 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 08:34:54 newest-cni-441323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-441323 -n newest-cni-441323
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-441323 -n newest-cni-441323: exit status 2 (350.040528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-441323 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c: exit status 1 (66.986452ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-mbqs4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-zffcn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-b755c" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-441323 describe pod coredns-7d764666f9-mbqs4 storage-provisioner dashboard-metrics-scraper-867fb5f87b-zffcn kubernetes-dashboard-b84665fb8-b755c: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.61s)

                                                
                                    

Test pass (353/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 13.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.27
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.34.3/json-events 10.06
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.24
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-rc.1/json-events 17.24
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.25
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.16
29 TestDownloadOnlyKic 0.44
30 TestBinaryMirror 0.91
31 TestOffline 66.24
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 104.28
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.46
57 TestAddons/StoppedEnableDisable 18.63
58 TestCertOptions 26.82
59 TestCertExpiration 209.46
61 TestForceSystemdFlag 25.79
62 TestForceSystemdEnv 29.11
67 TestErrorSpam/setup 22.47
68 TestErrorSpam/start 0.7
69 TestErrorSpam/status 0.97
70 TestErrorSpam/pause 6.5
71 TestErrorSpam/unpause 5.96
72 TestErrorSpam/stop 12.59
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 42.94
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.35
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.64
84 TestFunctional/serial/CacheCmd/cache/add_local 2.12
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 39.3
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.29
95 TestFunctional/serial/LogsFileCmd 1.33
96 TestFunctional/serial/InvalidService 5.49
98 TestFunctional/parallel/ConfigCmd 0.51
99 TestFunctional/parallel/DashboardCmd 8.95
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.19
102 TestFunctional/parallel/StatusCmd 1.02
106 TestFunctional/parallel/ServiceCmdConnect 9.73
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 19.27
110 TestFunctional/parallel/SSHCmd 0.67
111 TestFunctional/parallel/CpCmd 1.95
112 TestFunctional/parallel/MySQL 27.87
113 TestFunctional/parallel/FileSync 0.3
114 TestFunctional/parallel/CertSync 1.8
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
122 TestFunctional/parallel/License 0.36
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.23
129 TestFunctional/parallel/ServiceCmd/List 0.5
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
132 TestFunctional/parallel/ServiceCmd/Format 0.36
133 TestFunctional/parallel/ServiceCmd/URL 0.36
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
143 TestFunctional/parallel/Version/short 0.09
144 TestFunctional/parallel/Version/components 0.83
145 TestFunctional/parallel/ImageCommands/ImageListShort 1.58
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
149 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
150 TestFunctional/parallel/ImageCommands/Setup 1.92
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
152 TestFunctional/parallel/ProfileCmd/profile_list 0.6
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
155 TestFunctional/parallel/MountCmd/any-port 12.37
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.57
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.78
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
162 TestFunctional/parallel/MountCmd/specific-port 1.98
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.21
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 41.12
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 6.86
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.07
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.84
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 2.13
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.08
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.73
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.15
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.14
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.13
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 62.74
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.34
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.35
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 5.34
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.5
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 12.31
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.59
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.27
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.07
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 11.72
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.17
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 28.41
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.7
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.93
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 23.59
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.96
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.65
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.25
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.09
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.56
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.25
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.26
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.26
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.3
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.78
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.9
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.17
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.16
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.17
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.41
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.13
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 19.31
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 1.58
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.53
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.72
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.42
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 9.15
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.46
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.51
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 7.24
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.45
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.26
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 1.82
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.87
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 1.97
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.57
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.56
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.57
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 143.41
266 TestMultiControlPlane/serial/DeployApp 5.53
267 TestMultiControlPlane/serial/PingHostFromPods 1.12
268 TestMultiControlPlane/serial/AddWorkerNode 32.83
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
271 TestMultiControlPlane/serial/CopyFile 17.89
272 TestMultiControlPlane/serial/StopSecondaryNode 19.4
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.65
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 112.29
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.73
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
279 TestMultiControlPlane/serial/StopCluster 43.23
280 TestMultiControlPlane/serial/RestartCluster 56.06
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
282 TestMultiControlPlane/serial/AddSecondaryNode 42.21
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
288 TestJSONOutput/start/Command 43.73
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.24
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.31
313 TestKicCustomNetwork/create_custom_network 36.6
314 TestKicCustomNetwork/use_default_bridge_network 26.44
315 TestKicExistingNetwork 26.36
316 TestKicCustomSubnet 27.61
317 TestKicStaticIP 27.19
318 TestMainNoArgs 0.07
319 TestMinikubeProfile 50.85
322 TestMountStart/serial/StartWithMountFirst 7.8
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 7.96
325 TestMountStart/serial/VerifyMountSecond 0.28
326 TestMountStart/serial/DeleteFirst 1.7
327 TestMountStart/serial/VerifyMountPostDelete 0.28
328 TestMountStart/serial/Stop 1.27
329 TestMountStart/serial/RestartStopped 8.01
330 TestMountStart/serial/VerifyMountPostStop 0.28
333 TestMultiNode/serial/FreshStart2Nodes 73.08
334 TestMultiNode/serial/DeployApp2Nodes 4.28
335 TestMultiNode/serial/PingHostFrom2Pods 0.77
336 TestMultiNode/serial/AddNode 28.23
337 TestMultiNode/serial/MultiNodeLabels 0.07
338 TestMultiNode/serial/ProfileList 0.67
339 TestMultiNode/serial/CopyFile 10.16
340 TestMultiNode/serial/StopNode 2.32
341 TestMultiNode/serial/StartAfterStop 7.29
342 TestMultiNode/serial/RestartKeepsNodes 82.09
343 TestMultiNode/serial/DeleteNode 5.35
344 TestMultiNode/serial/StopMultiNode 28.72
345 TestMultiNode/serial/RestartMultiNode 45.98
346 TestMultiNode/serial/ValidateNameConflict 27.52
351 TestPreload 112.49
353 TestScheduledStopUnix 100.76
356 TestInsufficientStorage 11.94
357 TestRunningBinaryUpgrade 307.24
359 TestKubernetesUpgrade 312.3
360 TestMissingContainerUpgrade 100.4
362 TestPause/serial/Start 63.27
363 TestStoppedBinaryUpgrade/Setup 3.84
364 TestStoppedBinaryUpgrade/Upgrade 80.18
365 TestPause/serial/SecondStartNoReconfiguration 6.79
367 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
369 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
370 TestNoKubernetes/serial/StartWithK8s 26.37
378 TestNetworkPlugins/group/false 4.18
382 TestNoKubernetes/serial/StartWithStopK8s 8.55
383 TestNoKubernetes/serial/Start 7.08
384 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
385 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
386 TestNoKubernetes/serial/ProfileList 16.19
387 TestNoKubernetes/serial/Stop 1.29
388 TestNoKubernetes/serial/StartNoArgs 7.21
389 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
397 TestNetworkPlugins/group/auto/Start 42.61
398 TestNetworkPlugins/group/auto/KubeletFlags 0.31
399 TestNetworkPlugins/group/auto/NetCatPod 9.22
400 TestNetworkPlugins/group/auto/DNS 0.11
401 TestNetworkPlugins/group/auto/Localhost 0.09
402 TestNetworkPlugins/group/auto/HairPin 0.09
403 TestNetworkPlugins/group/kindnet/Start 42.81
404 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
405 TestNetworkPlugins/group/calico/Start 55.4
406 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
407 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
408 TestNetworkPlugins/group/kindnet/DNS 0.13
409 TestNetworkPlugins/group/kindnet/Localhost 0.11
410 TestNetworkPlugins/group/kindnet/HairPin 0.1
411 TestNetworkPlugins/group/custom-flannel/Start 58.45
412 TestNetworkPlugins/group/enable-default-cni/Start 69.74
413 TestNetworkPlugins/group/flannel/Start 55.57
414 TestNetworkPlugins/group/calico/ControllerPod 6.01
415 TestNetworkPlugins/group/calico/KubeletFlags 0.38
416 TestNetworkPlugins/group/calico/NetCatPod 10.31
417 TestNetworkPlugins/group/calico/DNS 0.21
418 TestNetworkPlugins/group/calico/Localhost 0.17
419 TestNetworkPlugins/group/calico/HairPin 0.24
420 TestNetworkPlugins/group/bridge/Start 41.1
421 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
422 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.21
423 TestNetworkPlugins/group/flannel/ControllerPod 6.01
424 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
425 TestNetworkPlugins/group/custom-flannel/DNS 0.17
426 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
427 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
428 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
429 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
430 TestNetworkPlugins/group/flannel/NetCatPod 8.27
431 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
432 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
433 TestNetworkPlugins/group/flannel/DNS 0.21
434 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
435 TestNetworkPlugins/group/flannel/Localhost 0.13
436 TestNetworkPlugins/group/flannel/HairPin 0.2
438 TestStartStop/group/old-k8s-version/serial/FirstStart 58.57
439 TestNetworkPlugins/group/bridge/KubeletFlags 0.61
440 TestNetworkPlugins/group/bridge/NetCatPod 11.02
442 TestStartStop/group/no-preload/serial/FirstStart 61.93
444 TestStartStop/group/embed-certs/serial/FirstStart 49.55
445 TestNetworkPlugins/group/bridge/DNS 0.17
446 TestNetworkPlugins/group/bridge/Localhost 0.13
447 TestNetworkPlugins/group/bridge/HairPin 0.11
449 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.41
450 TestStartStop/group/old-k8s-version/serial/DeployApp 9.29
451 TestStartStop/group/embed-certs/serial/DeployApp 8.3
454 TestStartStop/group/old-k8s-version/serial/Stop 16.15
455 TestStartStop/group/embed-certs/serial/Stop 16.34
456 TestStartStop/group/no-preload/serial/DeployApp 9.26
458 TestStartStop/group/no-preload/serial/Stop 16.4
459 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
460 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
461 TestStartStop/group/old-k8s-version/serial/SecondStart 47.38
462 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
463 TestStartStop/group/embed-certs/serial/SecondStart 45.24
465 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.62
466 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
467 TestStartStop/group/no-preload/serial/SecondStart 45.24
468 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
469 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.26
470 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
471 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
472 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
473 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
474 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
476 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
477 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
479 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
481 TestStartStop/group/newest-cni/serial/FirstStart 25.27
482 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
484 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
485 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
486 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
488 TestStartStop/group/newest-cni/serial/DeployApp 0
490 TestStartStop/group/newest-cni/serial/Stop 17.95
491 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
492 TestStartStop/group/newest-cni/serial/SecondStart 10.12
493 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
494 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
495 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (13.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-635623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-635623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.417935038s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.42s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 07:49:38.399045  556055 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 07:49:38.399139  556055 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-635623
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-635623: exit status 85 (89.905707ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-635623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-635623 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 07:49:25
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 07:49:25.036860  556068 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:49:25.036982  556068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:49:25.036994  556068 out.go:374] Setting ErrFile to fd 2...
	I1217 07:49:25.037001  556068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:49:25.037196  556068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	W1217 07:49:25.037328  556068 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22182-552461/.minikube/config/config.json: open /home/jenkins/minikube-integration/22182-552461/.minikube/config/config.json: no such file or directory
	I1217 07:49:25.038000  556068 out.go:368] Setting JSON to true
	I1217 07:49:25.039050  556068 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5510,"bootTime":1765952255,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 07:49:25.039113  556068 start.go:143] virtualization: kvm guest
	I1217 07:49:25.043958  556068 out.go:99] [download-only-635623] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 07:49:25.044172  556068 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 07:49:25.044164  556068 notify.go:221] Checking for updates...
	I1217 07:49:25.046243  556068 out.go:171] MINIKUBE_LOCATION=22182
	I1217 07:49:25.048155  556068 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 07:49:25.049713  556068 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:49:25.051209  556068 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 07:49:25.052647  556068 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 07:49:25.055204  556068 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 07:49:25.055463  556068 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 07:49:25.080599  556068 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 07:49:25.080736  556068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:49:25.138726  556068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-17 07:49:25.127445342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:49:25.138830  556068 docker.go:319] overlay module found
	I1217 07:49:25.140815  556068 out.go:99] Using the docker driver based on user configuration
	I1217 07:49:25.140863  556068 start.go:309] selected driver: docker
	I1217 07:49:25.140872  556068 start.go:927] validating driver "docker" against <nil>
	I1217 07:49:25.140987  556068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:49:25.199167  556068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-17 07:49:25.18929228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:49:25.199347  556068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 07:49:25.199902  556068 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 07:49:25.200044  556068 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 07:49:25.201942  556068 out.go:171] Using Docker driver with root privileges
	I1217 07:49:25.203356  556068 cni.go:84] Creating CNI manager for ""
	I1217 07:49:25.203423  556068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:49:25.203434  556068 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 07:49:25.203506  556068 start.go:353] cluster config:
	{Name:download-only-635623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-635623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 07:49:25.205101  556068 out.go:99] Starting "download-only-635623" primary control-plane node in "download-only-635623" cluster
	I1217 07:49:25.205122  556068 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 07:49:25.206556  556068 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 07:49:25.206594  556068 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 07:49:25.206700  556068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 07:49:25.224048  556068 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 07:49:25.224265  556068 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 07:49:25.224368  556068 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 07:49:25.592713  556068 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 07:49:25.592764  556068 cache.go:65] Caching tarball of preloaded images
	I1217 07:49:25.592944  556068 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 07:49:25.595122  556068 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 07:49:25.595156  556068 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 07:49:25.700814  556068 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1217 07:49:25.700937  556068 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 07:49:30.017738  556068 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	
	
	* The control-plane node download-only-635623 host does not exist
	  To start a cluster, run: "minikube start -p download-only-635623"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
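The Last Start log above records both the preload URL and the MD5 checksum returned by the GCS API, so the same download can be reproduced and verified outside minikube. A sketch using the URL and checksum copied from the log (curl and md5sum are assumed to be available):

  curl -fLo preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 \
    https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
  echo "72bc7f8573f574c02d8c9a9b3496176b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -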

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.27s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-635623
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (10.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-284037 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-284037 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.062193971s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (10.06s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 07:49:48.989980  556055 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 07:49:48.990021  556055 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-284037
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-284037: exit status 85 (78.554096ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-635623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-635623 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ delete  │ -p download-only-635623                                                                                                                                                   │ download-only-635623 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ start   │ -o=json --download-only -p download-only-284037 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-284037 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 07:49:38
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 07:49:38.987395  556450 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:49:38.987711  556450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:49:38.987721  556450 out.go:374] Setting ErrFile to fd 2...
	I1217 07:49:38.987726  556450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:49:38.987933  556450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:49:38.988439  556450 out.go:368] Setting JSON to true
	I1217 07:49:38.989501  556450 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5524,"bootTime":1765952255,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 07:49:38.989595  556450 start.go:143] virtualization: kvm guest
	I1217 07:49:38.992144  556450 out.go:99] [download-only-284037] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 07:49:38.992359  556450 notify.go:221] Checking for updates...
	I1217 07:49:38.994413  556450 out.go:171] MINIKUBE_LOCATION=22182
	I1217 07:49:38.996667  556450 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 07:49:38.998848  556450 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:49:39.001151  556450 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 07:49:39.003056  556450 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 07:49:39.006507  556450 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 07:49:39.006838  556450 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 07:49:39.033641  556450 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 07:49:39.033799  556450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:49:39.097410  556450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 07:49:39.085232034 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:49:39.097550  556450 docker.go:319] overlay module found
	I1217 07:49:39.099990  556450 out.go:99] Using the docker driver based on user configuration
	I1217 07:49:39.100054  556450 start.go:309] selected driver: docker
	I1217 07:49:39.100062  556450 start.go:927] validating driver "docker" against <nil>
	I1217 07:49:39.100160  556450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:49:39.159893  556450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 07:49:39.148215201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:49:39.160079  556450 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 07:49:39.160681  556450 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 07:49:39.160831  556450 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 07:49:39.163455  556450 out.go:171] Using Docker driver with root privileges
	I1217 07:49:39.165412  556450 cni.go:84] Creating CNI manager for ""
	I1217 07:49:39.165491  556450 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:49:39.165505  556450 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 07:49:39.165627  556450 start.go:353] cluster config:
	{Name:download-only-284037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-284037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 07:49:39.167617  556450 out.go:99] Starting "download-only-284037" primary control-plane node in "download-only-284037" cluster
	I1217 07:49:39.167651  556450 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 07:49:39.169424  556450 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 07:49:39.169482  556450 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:49:39.169621  556450 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 07:49:39.188895  556450 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 07:49:39.189133  556450 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 07:49:39.189162  556450 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 07:49:39.189167  556450 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 07:49:39.189176  556450 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 07:49:39.544218  556450 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 07:49:39.544272  556450 cache.go:65] Caching tarball of preloaded images
	I1217 07:49:39.544486  556450 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 07:49:39.546706  556450 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1217 07:49:39.546740  556450 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 07:49:39.656155  556450 preload.go:295] Got checksum from GCS API "fdea575627999e8631bb8fa579d884c7"
	I1217 07:49:39.656209  556450 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:fdea575627999e8631bb8fa579d884c7 -> /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-284037 host does not exist
	  To start a cluster, run: "minikube start -p download-only-284037"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-284037
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (17.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-505037 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-505037 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (17.235614317s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (17.24s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 07:50:06.696583  556055 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1217 07:50:06.696637  556055 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-505037
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-505037: exit status 85 (81.963969ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-635623 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-635623 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ delete  │ -p download-only-635623                                                                                                                                                        │ download-only-635623 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ start   │ -o=json --download-only -p download-only-284037 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-284037 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ delete  │ -p download-only-284037                                                                                                                                                        │ download-only-284037 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │ 17 Dec 25 07:49 UTC │
	│ start   │ -o=json --download-only -p download-only-505037 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-505037 │ jenkins │ v1.37.0 │ 17 Dec 25 07:49 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 07:49:49
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 07:49:49.518348  556817 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:49:49.518650  556817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:49:49.518668  556817 out.go:374] Setting ErrFile to fd 2...
	I1217 07:49:49.518673  556817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:49:49.519069  556817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:49:49.519703  556817 out.go:368] Setting JSON to true
	I1217 07:49:49.520692  556817 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5534,"bootTime":1765952255,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 07:49:49.520775  556817 start.go:143] virtualization: kvm guest
	I1217 07:49:49.524021  556817 out.go:99] [download-only-505037] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 07:49:49.524254  556817 notify.go:221] Checking for updates...
	I1217 07:49:49.526596  556817 out.go:171] MINIKUBE_LOCATION=22182
	I1217 07:49:49.529824  556817 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 07:49:49.532622  556817 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:49:49.534737  556817 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 07:49:49.537664  556817 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 07:49:49.541269  556817 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 07:49:49.541620  556817 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 07:49:49.566672  556817 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 07:49:49.566770  556817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:49:49.631965  556817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 07:49:49.619817648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:49:49.632087  556817 docker.go:319] overlay module found
	I1217 07:49:49.634381  556817 out.go:99] Using the docker driver based on user configuration
	I1217 07:49:49.634441  556817 start.go:309] selected driver: docker
	I1217 07:49:49.634451  556817 start.go:927] validating driver "docker" against <nil>
	I1217 07:49:49.634631  556817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:49:49.692930  556817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 07:49:49.682649127 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:49:49.693228  556817 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 07:49:49.693766  556817 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 07:49:49.693930  556817 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 07:49:49.696189  556817 out.go:171] Using Docker driver with root privileges
	I1217 07:49:49.697804  556817 cni.go:84] Creating CNI manager for ""
	I1217 07:49:49.697894  556817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 07:49:49.697911  556817 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 07:49:49.698003  556817 start.go:353] cluster config:
	{Name:download-only-505037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-505037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 07:49:49.699648  556817 out.go:99] Starting "download-only-505037" primary control-plane node in "download-only-505037" cluster
	I1217 07:49:49.699681  556817 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 07:49:49.701493  556817 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 07:49:49.701576  556817 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 07:49:49.701708  556817 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 07:49:49.720685  556817 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 07:49:49.720859  556817 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 07:49:49.720880  556817 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 07:49:49.720885  556817 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 07:49:49.720893  556817 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 07:49:50.066480  556817 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 07:49:50.066564  556817 cache.go:65] Caching tarball of preloaded images
	I1217 07:49:50.066781  556817 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 07:49:50.068972  556817 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1217 07:49:50.069004  556817 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 07:49:50.167424  556817 preload.go:295] Got checksum from GCS API "46a82b10f18f180acaede5af8ca381a9"
	I1217 07:49:50.167487  556817 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:46a82b10f18f180acaede5af8ca381a9 -> /home/jenkins/minikube-integration/22182-552461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 07:49:59.006922  556817 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 07:49:59.007434  556817 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/download-only-505037/config.json ...
	I1217 07:49:59.007476  556817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/download-only-505037/config.json: {Name:mk6a2955731d0adb5049c68fe6a942467c593caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 07:49:59.007720  556817 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 07:49:59.007903  556817 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl
	
	
	* The control-plane node download-only-505037 host does not exist
	  To start a cluster, run: "minikube start -p download-only-505037"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)
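Alongside the preload tarball, this v1.35.0-rc.1 run also downloads kubectl with a checksum=file: reference to the published .sha256 (see the Downloading line in the log above). The equivalent manual verification, assuming the .sha256 file contains only the bare hash as the upstream release files do:

  curl -fLO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl
  curl -fLO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check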

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-505037
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-300295 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-300295" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-300295
--- PASS: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestBinaryMirror (0.91s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 07:50:08.104414  556055 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-777344 --alsologtostderr --binary-mirror http://127.0.0.1:43583 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-777344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-777344
--- PASS: TestBinaryMirror (0.91s)
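TestBinaryMirror points the binary download at a local HTTP endpoint via --binary-mirror instead of dl.k8s.io. A rough sketch of standing up such a mirror by hand; the directory layout the mirror must serve and the free port are assumptions here, and binary-mirror-demo is a hypothetical profile name:

  # assumed layout mirroring dl.k8s.io/release/<version>/bin/<os>/<arch>/
  mkdir -p mirror/release/v1.34.3/bin/linux/amd64
  curl -fLo mirror/release/v1.34.3/bin/linux/amd64/kubectl \
    https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl
  curl -fLo mirror/release/v1.34.3/bin/linux/amd64/kubectl.sha256 \
    https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
  (cd mirror && python3 -m http.server 43583) &
  out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr \
    --binary-mirror http://127.0.0.1:43583 --driver=docker --container-runtime=crio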

                                                
                                    
TestOffline (66.24s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-077569 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-077569 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m3.533112853s)
helpers_test.go:176: Cleaning up "offline-crio-077569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-077569
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-077569: (2.705084957s)
--- PASS: TestOffline (66.24s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910958
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-910958: exit status 85 (72.877709ms)

                                                
                                                
-- stdout --
	* Profile "addons-910958" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910958"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910958
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-910958: exit status 85 (72.725195ms)

                                                
                                                
-- stdout --
	* Profile "addons-910958" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910958"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (104.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-910958 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-910958 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m44.280030866s)
--- PASS: TestAddons/Setup (104.28s)
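Note: the run above enables the whole addon set in a single start invocation. As a rough sketch (profile name addons-demo and the chosen addons are illustrative, not from this run), the same addons can also be toggled individually after the cluster is up:
  # Start a minimal cluster, enable addons one at a time, then list their status.
  out/minikube-linux-amd64 start -p addons-demo --memory=4096 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p addons-demo addons enable metrics-server
  out/minikube-linux-amd64 -p addons-demo addons enable ingress
  out/minikube-linux-amd64 -p addons-demo addons list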

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-910958 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-910958 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.46s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-910958 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-910958 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [65e19e6e-a12c-411f-b533-a578e1a367ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [65e19e6e-a12c-411f-b533-a578e1a367ef] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004358907s
addons_test.go:696: (dbg) Run:  kubectl --context addons-910958 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-910958 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-910958 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.46s)
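Note: the checks above read the environment injected by the gcp-auth webhook. A minimal sketch of inspecting the fake credentials from inside the test pod, assuming the busybox pod from testdata/busybox.yaml is still running:
  # Print the injected variables and dump the credentials file they point at.
  kubectl --context addons-910958 exec busybox -- /bin/sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'
  kubectl --context addons-910958 exec busybox -- /bin/sh -c 'cat "$GOOGLE_APPLICATION_CREDENTIALS"'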

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (18.63s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-910958
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-910958: (18.314571507s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910958
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910958
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-910958
--- PASS: TestAddons/StoppedEnableDisable (18.63s)

                                                
                                    
x
+
TestCertOptions (26.82s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-730999 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1217 08:26:54.516720  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-730999 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.640579414s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-730999 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-730999 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-730999 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-730999" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-730999
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-730999: (2.498609996s)
--- PASS: TestCertOptions (26.82s)
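Note: the openssl call above dumps the entire apiserver certificate. A short sketch of narrowing that output to what this test cares about (the extra SANs and the non-default 8555 port), run while the profile is still up; the grep pattern and jsonpath filter are illustrative:
  # Show only the Subject Alternative Name block of the apiserver certificate.
  out/minikube-linux-amd64 -p cert-options-730999 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
  # Confirm the kubeconfig entry points at the custom apiserver port.
  kubectl config view -o jsonpath='{.clusters[?(@.name=="cert-options-730999")].cluster.server}'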

                                                
                                    
x
+
TestCertExpiration (209.46s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993423 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993423 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.179207833s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993423 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993423 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.683611108s)
helpers_test.go:176: Cleaning up "cert-expiration-993423" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-993423
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-993423: (2.591928503s)
--- PASS: TestCertExpiration (209.46s)
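Note: the two start invocations above are separated by roughly three minutes so the short-lived certificates can lapse (consistent with the ~209s total). A sketch of the same flow with the wait made explicit; the profile name is hypothetical:
  # Create a cluster whose certs expire in 3 minutes, let them lapse,
  # then start again with a long expiration so the certs are regenerated.
  out/minikube-linux-amd64 start -p cert-expiry-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
  sleep 200
  out/minikube-linux-amd64 start -p cert-expiry-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 delete -p cert-expiry-demo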

                                                
                                    
x
+
TestForceSystemdFlag (25.79s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-635474 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-635474 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.972087547s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-635474 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-635474" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-635474
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-635474: (2.490667416s)
--- PASS: TestForceSystemdFlag (25.79s)
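Note: the ssh cat above verifies the generated CRI-O drop-in. A minimal sketch of checking specifically for the systemd cgroup manager setting; the cgroup_manager key name is the standard CRI-O option and is stated here as an assumption rather than taken from this log:
  # Expect cgroup_manager = "systemd" when --force-systemd is passed.
  out/minikube-linux-amd64 -p force-systemd-flag-635474 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"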

                                                
                                    
x
+
TestForceSystemdEnv (29.11s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-819594 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-819594 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.635757357s)
helpers_test.go:176: Cleaning up "force-systemd-env-819594" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-819594
E1217 08:25:27.226446  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-819594: (2.478636047s)
--- PASS: TestForceSystemdEnv (29.11s)

                                                
                                    
x
+
TestErrorSpam/setup (22.47s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-747641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-747641 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-747641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-747641 --driver=docker  --container-runtime=crio: (22.474120151s)
--- PASS: TestErrorSpam/setup (22.47s)

                                                
                                    
x
+
TestErrorSpam/start (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
x
+
TestErrorSpam/status (0.97s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 status
--- PASS: TestErrorSpam/status (0.97s)

                                                
                                    
x
+
TestErrorSpam/pause (6.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause: exit status 80 (2.243128857s)

                                                
                                                
-- stdout --
	* Pausing node nospam-747641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:55:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause: exit status 80 (2.129563173s)

                                                
                                                
-- stdout --
	* Pausing node nospam-747641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:55:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause: exit status 80 (2.123447456s)

                                                
                                                
-- stdout --
	* Pausing node nospam-747641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:55:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.50s)
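Note: all three pause attempts above fail the same way: the guest's runc state directory /run/runc is missing, so `sudo runc list -f json` exits non-zero before minikube can enumerate containers. A small sketch of reproducing that check directly on the node, using the profile from this block:
  # Run the same listing the pause path uses, then inspect the state directory.
  out/minikube-linux-amd64 -p nospam-747641 ssh "sudo runc list -f json"
  out/minikube-linux-amd64 -p nospam-747641 ssh "ls -ld /run/runc"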

                                                
                                    
x
+
TestErrorSpam/unpause (5.96s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause: exit status 80 (2.120342439s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-747641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:55:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause: exit status 80 (1.882387949s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-747641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:55:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause: exit status 80 (1.955838956s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-747641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T07:55:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.96s)

                                                
                                    
x
+
TestErrorSpam/stop (12.59s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 stop: (12.364543232s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-747641 --log_dir /tmp/nospam-747641 stop
--- PASS: TestErrorSpam/stop (12.59s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/test/nested/copy/556055/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (42.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-981680 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-981680 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.942105418s)
--- PASS: TestFunctional/serial/StartWithProxy (42.94s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1217 07:56:36.238470  556055 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-981680 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-981680 --alsologtostderr -v=8: (6.351021781s)
functional_test.go:678: soft start took 6.353560477s for "functional-981680" cluster.
I1217 07:56:42.591320  556055 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (6.35s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-981680 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.64s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-981680 /tmp/TestFunctionalserialCacheCmdcacheadd_local3580201109/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cache add minikube-local-cache-test:functional-981680
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 cache add minikube-local-cache-test:functional-981680: (1.730995202s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cache delete minikube-local-cache-test:functional-981680
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-981680
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.12s)
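Note: the test builds a throwaway image from a generated directory before caching it. A rough sketch of the same local-image cache flow with an explicit minimal Dockerfile; the image name and directory are illustrative:
  # Build a tiny local image, push it into the minikube cache, then clean up.
  mkdir -p /tmp/local-cache-demo
  printf 'FROM busybox\nCMD ["true"]\n' > /tmp/local-cache-demo/Dockerfile
  docker build -t local-cache-demo:latest /tmp/local-cache-demo
  out/minikube-linux-amd64 -p functional-981680 cache add local-cache-demo:latest
  out/minikube-linux-amd64 -p functional-981680 cache delete local-cache-demo:latest
  docker rmi local-cache-demo:latest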

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.113018ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
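Note: the reload check above removes an image from the node's runtime, confirms it is gone, and restores it from minikube's on-disk cache. The same sequence, annotated (commands taken from this block):
  # Remove the image inside the node, verify the inspect fails, then reload from cache.
  out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail here
  out/minikube-linux-amd64 -p functional-981680 cache reload
  out/minikube-linux-amd64 -p functional-981680 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should succeed again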

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 kubectl -- --context functional-981680 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-981680 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39.3s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-981680 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 07:56:54.516657  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:54.523120  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:54.534589  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:54.556091  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:54.597637  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:54.679132  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:54.840811  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:55.162616  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:55.804713  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:57.086352  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:56:59.649339  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:57:04.770863  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 07:57:15.012342  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-981680 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.295431237s)
functional_test.go:776: restart took 39.295572112s for "functional-981680" cluster.
I1217 07:57:29.169677  556055 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (39.30s)
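Note: the restart above passes an extra apiserver flag through --extra-config. A short sketch of confirming the flag actually reached the kube-apiserver static pod; the pod-name pattern (kube-apiserver-<node>, where the node name matches the profile on a single-node cluster) and the grep are illustrative:
  # Expect --enable-admission-plugins=NamespaceAutoProvision in the apiserver command.
  kubectl --context functional-981680 -n kube-system get pod kube-apiserver-functional-981680 -o yaml | grep enable-admission-plugins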

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-981680 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 logs: (1.293227804s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 logs --file /tmp/TestFunctionalserialLogsFileCmd3569488067/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 logs --file /tmp/TestFunctionalserialLogsFileCmd3569488067/001/logs.txt: (1.332244377s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.49s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-981680 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-981680
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-981680: exit status 115 (365.055136ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32553 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-981680 delete -f testdata/invalidsvc.yaml
E1217 07:57:35.494685  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2332: (dbg) Done: kubectl --context functional-981680 delete -f testdata/invalidsvc.yaml: (1.936913646s)
--- PASS: TestFunctional/serial/InvalidService (5.49s)
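Note: the testdata manifest itself is not shown in this log. One way to reproduce the SVC_UNREACHABLE exit is a NodePort service whose selector matches no pods, sketched below; this is an assumed stand-in for testdata/invalidsvc.yaml, not its actual contents:
  # Create a NodePort service with selector app=invalid-svc (matching nothing),
  # then ask minikube for its URL; it should fail with "no running pod for service", as above.
  kubectl --context functional-981680 create service nodeport invalid-svc --tcp=80:80
  out/minikube-linux-amd64 service invalid-svc -p functional-981680
  kubectl --context functional-981680 delete service invalid-svc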

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 config get cpus: exit status 14 (98.452992ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 config get cpus: exit status 14 (92.128425ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-981680 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-981680 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 593941: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.95s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-981680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-981680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (213.019043ms)

                                                
                                                
-- stdout --
	* [functional-981680] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:57:57.738682  593257 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:57:57.738985  593257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:57:57.738997  593257 out.go:374] Setting ErrFile to fd 2...
	I1217 07:57:57.739003  593257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:57:57.739224  593257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:57:57.739768  593257 out.go:368] Setting JSON to false
	I1217 07:57:57.741085  593257 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6023,"bootTime":1765952255,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 07:57:57.741160  593257 start.go:143] virtualization: kvm guest
	I1217 07:57:57.745852  593257 out.go:179] * [functional-981680] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 07:57:57.747251  593257 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 07:57:57.747267  593257 notify.go:221] Checking for updates...
	I1217 07:57:57.750938  593257 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 07:57:57.752705  593257 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:57:57.754044  593257 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 07:57:57.755225  593257 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 07:57:57.756661  593257 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 07:57:57.758593  593257 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:57:57.759409  593257 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 07:57:57.791661  593257 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 07:57:57.791787  593257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:57:57.855483  593257 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-17 07:57:57.845238287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:57:57.855669  593257 docker.go:319] overlay module found
	I1217 07:57:57.861822  593257 out.go:179] * Using the docker driver based on existing profile
	I1217 07:57:57.863431  593257 start.go:309] selected driver: docker
	I1217 07:57:57.863451  593257 start.go:927] validating driver "docker" against &{Name:functional-981680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-981680 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 07:57:57.863572  593257 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 07:57:57.865547  593257 out.go:203] 
	W1217 07:57:57.867276  593257 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 07:57:57.869180  593257 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-981680 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-981680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-981680 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (185.280639ms)

                                                
                                                
-- stdout --
	* [functional-981680] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 07:57:58.188828  593519 out.go:360] Setting OutFile to fd 1 ...
	I1217 07:57:58.189116  593519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:57:58.189128  593519 out.go:374] Setting ErrFile to fd 2...
	I1217 07:57:58.189132  593519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 07:57:58.189454  593519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 07:57:58.189923  593519 out.go:368] Setting JSON to false
	I1217 07:57:58.191059  593519 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6023,"bootTime":1765952255,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 07:57:58.191122  593519 start.go:143] virtualization: kvm guest
	I1217 07:57:58.193449  593519 out.go:179] * [functional-981680] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 07:57:58.194770  593519 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 07:57:58.194877  593519 notify.go:221] Checking for updates...
	I1217 07:57:58.198244  593519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 07:57:58.199581  593519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 07:57:58.200785  593519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 07:57:58.202232  593519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 07:57:58.203719  593519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 07:57:58.206061  593519 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 07:57:58.206849  593519 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 07:57:58.238464  593519 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 07:57:58.238663  593519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 07:57:58.302080  593519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-17 07:57:58.291832595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 07:57:58.302219  593519 docker.go:319] overlay module found
	I1217 07:57:58.304122  593519 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 07:57:58.305167  593519 start.go:309] selected driver: docker
	I1217 07:57:58.305183  593519 start.go:927] validating driver "docker" against &{Name:functional-981680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-981680 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 07:57:58.305276  593519 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 07:57:58.307003  593519 out.go:203] 
	W1217 07:57:58.308139  593519 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 07:57:58.309250  593519 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
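The French text above is the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY failure exercised by DryRun: the requested 250MiB is below the 1800MB usable minimum. A minimal way to reproduce the localized dry-run by hand, assuming the locale is picked up from the standard LC_ALL/LANG environment variables (this log does not show which variables the test harness actually sets):

    # dry-run with an undersized memory request; expect exit status 23 and localized output
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-981680 \
      --dry-run --memory 250MB --driver=docker --container-runtime=crio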

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
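The second invocation above exercises the Go-template output of status; "kublet" is simply the label chosen in the test's format string, not a minikube field name. A sketch of the same call with the conventional spelling, against the profile from this run:

    out/minikube-linux-amd64 -p functional-981680 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # typical output on a healthy cluster:
    # host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured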

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-981680 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-981680 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-82dng" [1dce6be0-ba99-4e84-9521-ad1128a35f18] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-82dng" [1dce6be0-ba99-4e84-9521-ad1128a35f18] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004156998s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32023
functional_test.go:1680: http://192.168.49.2:32023: success! body:
Request served by hello-node-connect-7d85dfc575-82dng

HTTP/1.1 GET /

Host: 192.168.49.2:32023
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.73s)
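The flow being validated is: create a deployment from the echo-server image, expose it as a NodePort service, ask minikube for a reachable URL, and fetch it. A condensed sketch of that sequence, with names taken from the log:

    kubectl --context functional-981680 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-981680 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-981680 service hello-node-connect --url)
    curl -s "$URL"   # echo-server responds with the request it received, as shown above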

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (19.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [49dbec15-ab8c-40be-9bf4-0eaf7be4a9f7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004499574s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-981680 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-981680 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-981680 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-981680 apply -f testdata/storage-provisioner/pod.yaml
I1217 07:57:43.771551  556055 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7f10d528-d1c6-4230-bd46-0b6ee303f67e] Pending
helpers_test.go:353: "sp-pod" [7f10d528-d1c6-4230-bd46-0b6ee303f67e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003564893s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-981680 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-981680 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-981680 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [9ee3e632-15c4-4815-8d96-10411c33866a] Pending
helpers_test.go:353: "sp-pod" [9ee3e632-15c4-4815-8d96-10411c33866a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00378224s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-981680 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.27s)
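The test checks that data written through the PVC survives pod replacement: it creates the claim and a pod that mounts it, touches /tmp/mount/foo, deletes and recreates the pod, then lists the mount again. The same check, reusing the manifests referenced in the log:

    kubectl --context functional-981680 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-981680 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-981680 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-981680 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-981680 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-981680 exec sp-pod -- ls /tmp/mount   # foo should still be listed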

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh -n functional-981680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cp functional-981680:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1179394203/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh -n functional-981680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh -n functional-981680 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)
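Each cp above is verified by reading the destination back over SSH. The same round trip, copying a file into the node and back out (the host-side destination path here is illustrative):

    out/minikube-linux-amd64 -p functional-981680 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-981680 ssh -n functional-981680 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-981680 cp functional-981680:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    diff testdata/cp-test.txt /tmp/cp-test-copy.txt   # no output means the round trip preserved the contents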

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-981680 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-t7twq" [547680a5-138d-4f1c-a922-6ae3c2a548c3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-t7twq" [547680a5-138d-4f1c-a922-6ae3c2a548c3] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.005499686s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;": exit status 1 (137.371368ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 07:58:03.962382  556055 retry.go:31] will retry after 812.346137ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;": exit status 1 (149.19565ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 07:58:04.924337  556055 retry.go:31] will retry after 1.638431257s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;": exit status 1 (213.305095ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 07:58:06.777354  556055 retry.go:31] will retry after 1.246155755s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;": exit status 1 (100.622253ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 07:58:08.125230  556055 retry.go:31] will retry after 2.454122309s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;": exit status 1 (94.422986ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 07:58:10.674728  556055 retry.go:31] will retry after 5.710665122s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- mysql -ppassword -e "show databases;"
E1217 07:58:16.457018  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (27.87s)
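The retries above are expected: the pod reports Running before mysqld finishes initializing, so early exec attempts fail with ERROR 2002 (socket not yet created) and then ERROR 1045 (root password not yet applied). A simple polling loop equivalent to what the test does, with the pod name from this run and an illustrative interval:

    until kubectl --context functional-981680 exec mysql-6bcdcbc558-t7twq -- \
        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
      sleep 2   # retry until mysqld accepts the connection and the password
    done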

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/556055/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /etc/test/nested/copy/556055/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/556055.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /etc/ssl/certs/556055.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/556055.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /usr/share/ca-certificates/556055.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5560552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /etc/ssl/certs/5560552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5560552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /usr/share/ca-certificates/5560552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.80s)
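The certificate is expected both under its literal .pem name and under the OpenSSL subject-hash names (51391683.0 and 3ec20f2e.0) in /etc/ssl/certs. A quick manual cross-check, assuming the host-side copies live under $MINIKUBE_HOME/files, the directory minikube syncs into the node (the exact host paths are not shown in this log):

    # the synced hash filename should match OpenSSL's subject hash of the cert
    openssl x509 -noout -hash -in $MINIKUBE_HOME/files/etc/ssl/certs/556055.pem   # should print 51391683 for this cert
    out/minikube-linux-amd64 -p functional-981680 ssh "sudo cat /etc/ssl/certs/51391683.0"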

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-981680 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 ssh "sudo systemctl is-active docker": exit status 1 (297.715653ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 ssh "sudo systemctl is-active containerd": exit status 1 (293.207454ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
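With crio as the active runtime, docker and containerd are expected to be stopped inside the node; systemctl is-active prints "inactive" and exits 3, which is why the test records a non-zero exit yet still passes. The equivalent manual check:

    out/minikube-linux-amd64 -p functional-981680 ssh "sudo systemctl is-active crio"        # active, exit 0
    out/minikube-linux-amd64 -p functional-981680 ssh "sudo systemctl is-active docker"      # inactive, exit 3
    out/minikube-linux-amd64 -p functional-981680 ssh "sudo systemctl is-active containerd"  # inactive, exit 3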

                                                
                                    
x
+
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-981680 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-981680 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-b5lht" [940f62e7-cd90-4a11-b8fb-844dede1a614] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-b5lht" [940f62e7-cd90-4a11-b8fb-844dede1a614] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003545192s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-981680 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-981680 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-981680 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-981680 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 588312: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-981680 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-981680 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a8c25b8b-34c4-449e-863c-51238952dd67] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a8c25b8b-34c4-449e-863c-51238952dd67] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004376216s
I1217 07:57:47.566991  556055 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 service list -o json
functional_test.go:1504: Took "504.196121ms" to run "out/minikube-linux-amd64 -p functional-981680 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32547
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32547
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
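The HTTPS, Format, and URL subtests resolve the same NodePort (32547 in this run) in three shapes. A sketch of the variants; the IP and port will differ between runs:

    out/minikube-linux-amd64 -p functional-981680 service --namespace=default --https --url hello-node   # https://192.168.49.2:32547
    out/minikube-linux-amd64 -p functional-981680 service hello-node --url --format={{.IP}}              # 192.168.49.2
    out/minikube-linux-amd64 -p functional-981680 service hello-node --url                               # http://192.168.49.2:32547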

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-981680 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.32.199 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
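minikube tunnel is what makes the LoadBalancer service reachable: it routes the service's external IP (10.107.32.199 above, which equals its ClusterIP) to the host. A sketch of the sequence the Tunnel subtests run; on Linux the tunnel process needs privileges to add the route:

    out/minikube-linux-amd64 -p functional-981680 tunnel --alsologtostderr &   # keep running in the background
    kubectl --context functional-981680 apply -f testdata/testsvc.yaml
    kubectl --context functional-981680 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -sI http://10.107.32.199/   # nginx should answer once the tunnel has the route up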

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-981680 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
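update-context rewrites the kubeconfig entry for the profile so the server address matches the running cluster; the three subtests only differ in what the pre-existing kubeconfig contains. A manual equivalent, with the verification step being an assumption rather than part of the test:

    out/minikube-linux-amd64 -p functional-981680 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-981680")].cluster.server}'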

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)
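Besides the minikube version itself, the --components form reports the versions of binaries bundled in the node image (such as crictl, crio, and runc). The two invocations from this run:

    out/minikube-linux-amd64 -p functional-981680 version --short              # just the minikube version, v1.37.0 here
    out/minikube-linux-amd64 -p functional-981680 version -o=json --components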

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 image ls --format short --alsologtostderr: (1.5787858s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-981680 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-981680
localhost/kicbase/echo-server:functional-981680
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-981680 image ls --format short --alsologtostderr:
I1217 07:58:03.639229  595134 out.go:360] Setting OutFile to fd 1 ...
I1217 07:58:03.639554  595134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:03.639569  595134 out.go:374] Setting ErrFile to fd 2...
I1217 07:58:03.639577  595134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:03.639837  595134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 07:58:03.640437  595134 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:03.640527  595134 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:03.641059  595134 cli_runner.go:164] Run: docker container inspect functional-981680 --format={{.State.Status}}
I1217 07:58:03.660942  595134 ssh_runner.go:195] Run: systemctl --version
I1217 07:58:03.661004  595134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-981680
I1217 07:58:03.681143  595134 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-981680/id_ed25519 Username:docker}
I1217 07:58:03.780566  595134 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 07:58:05.149054  595134 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.368448131s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-981680 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-981680                     │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3                               │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-981680                     │ b158322e9d6ff │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3                               │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1                               │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0                               │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3                               │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.3                               │ aec12dadf56dd │ 53.9MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ a236f84b9d5d2 │ 55.2MB │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-981680 image ls --format table --alsologtostderr:
I1217 07:58:05.499508  595793 out.go:360] Setting OutFile to fd 1 ...
I1217 07:58:05.499648  595793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:05.499657  595793 out.go:374] Setting ErrFile to fd 2...
I1217 07:58:05.499662  595793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:05.499881  595793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 07:58:05.500483  595793 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:05.500588  595793 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:05.501064  595793 cli_runner.go:164] Run: docker container inspect functional-981680 --format={{.State.Status}}
I1217 07:58:05.524086  595793 ssh_runner.go:195] Run: systemctl --version
I1217 07:58:05.524162  595793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-981680
I1217 07:58:05.548626  595793 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-981680/id_ed25519 Username:docker}
I1217 07:58:05.647768  595793 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
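The ImageList subtests (short, table, and the json variant that follows) all run crictl images --output json on the node, as the stderr traces show, and differ only in how the result is rendered:

    out/minikube-linux-amd64 -p functional-981680 image ls --format short   # repo:tag, one per line
    out/minikube-linux-amd64 -p functional-981680 image ls --format table   # the box-drawn table above
    out/minikube-linux-amd64 -p functional-981680 image ls --format json    # raw image metadata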

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-981680 image ls --format json --alsologtostderr:
[{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"b158322e9d6ffe0a4c9d95ec9dbc367c61b122be0019b526b7cafa4fa245eef7","repoDigests":["localhost/minikube-local-cache-test@sha256:6522116068c20da7671947b36292f5e002943fa7ace3f429b158e370175d657e"],"repoTags":["localhost/minikube-local-cache-test:functional-981680"],"size":"3330"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":
"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256
:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-981680"],"size":"4944818"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"
size":"803724943"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5
a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8
s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-981680 image ls --format json --alsologtostderr:
I1217 07:58:05.232327  595718 out.go:360] Setting OutFile to fd 1 ...
I1217 07:58:05.232639  595718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:05.232655  595718 out.go:374] Setting ErrFile to fd 2...
I1217 07:58:05.232663  595718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:05.232993  595718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 07:58:05.233852  595718 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:05.233999  595718 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:05.234717  595718 cli_runner.go:164] Run: docker container inspect functional-981680 --format={{.State.Status}}
I1217 07:58:05.257739  595718 ssh_runner.go:195] Run: systemctl --version
I1217 07:58:05.257804  595718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-981680
I1217 07:58:05.281420  595718 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-981680/id_ed25519 Username:docker}
I1217 07:58:05.376458  595718 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
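The JSON listing above is a flat array whose elements carry id, repoDigests, repoTags, and size fields, so it scripts easily. A minimal sketch of post-processing it on the host, assuming jq is installed there (jq is not part of the test's own tooling):

    # Show truncated image ID, tags, and size for every image in the crio store.
    out/minikube-linux-amd64 -p functional-981680 image ls --format json \
      | jq -r '.[] | "\(.id[0:13])\t\(.repoTags | join(","))\t\(.size)"'

Entries with an empty repoTags array (such as the kubernetesui/dashboard image above) simply print an empty tag column.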

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-981680 image ls --format yaml --alsologtostderr:
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: b158322e9d6ffe0a4c9d95ec9dbc367c61b122be0019b526b7cafa4fa245eef7
repoDigests:
- localhost/minikube-local-cache-test@sha256:6522116068c20da7671947b36292f5e002943fa7ace3f429b158e370175d657e
repoTags:
- localhost/minikube-local-cache-test:functional-981680
size: "3330"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-981680
size: "4944818"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-981680 image ls --format yaml --alsologtostderr:
I1217 07:58:05.757735  595943 out.go:360] Setting OutFile to fd 1 ...
I1217 07:58:05.758049  595943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:05.758065  595943 out.go:374] Setting ErrFile to fd 2...
I1217 07:58:05.758073  595943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:05.758395  595943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 07:58:05.759263  595943 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:05.759421  595943 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:05.760088  595943 cli_runner.go:164] Run: docker container inspect functional-981680 --format={{.State.Status}}
I1217 07:58:05.781805  595943 ssh_runner.go:195] Run: systemctl --version
I1217 07:58:05.781875  595943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-981680
I1217 07:58:05.801387  595943 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-981680/id_ed25519 Username:docker}
I1217 07:58:05.900182  595943 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 ssh pgrep buildkitd: exit status 1 (301.978701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image build -t localhost/my-image:functional-981680 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 image build -t localhost/my-image:functional-981680 testdata/build --alsologtostderr: (3.351910341s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-981680 image build -t localhost/my-image:functional-981680 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 24e3d87e51a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-981680
--> b00bb83180a
Successfully tagged localhost/my-image:functional-981680
b00bb83180a9b12e4b65acb10eea09f0e32e42f94a43b813d1c0ea6f4cd909b8
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-981680 image build -t localhost/my-image:functional-981680 testdata/build --alsologtostderr:
I1217 07:58:06.314056  596285 out.go:360] Setting OutFile to fd 1 ...
I1217 07:58:06.314175  596285 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:06.314190  596285 out.go:374] Setting ErrFile to fd 2...
I1217 07:58:06.314196  596285 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 07:58:06.314451  596285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 07:58:06.315096  596285 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:06.315957  596285 config.go:182] Loaded profile config "functional-981680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 07:58:06.316445  596285 cli_runner.go:164] Run: docker container inspect functional-981680 --format={{.State.Status}}
I1217 07:58:06.336907  596285 ssh_runner.go:195] Run: systemctl --version
I1217 07:58:06.336954  596285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-981680
I1217 07:58:06.357726  596285 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-981680/id_ed25519 Username:docker}
I1217 07:58:06.452902  596285 build_images.go:162] Building image from path: /tmp/build.1952290828.tar
I1217 07:58:06.452982  596285 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 07:58:06.463501  596285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1952290828.tar
I1217 07:58:06.468610  596285 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1952290828.tar: stat -c "%s %y" /var/lib/minikube/build/build.1952290828.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1952290828.tar': No such file or directory
I1217 07:58:06.468641  596285 ssh_runner.go:362] scp /tmp/build.1952290828.tar --> /var/lib/minikube/build/build.1952290828.tar (3072 bytes)
I1217 07:58:06.489230  596285 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1952290828
I1217 07:58:06.498427  596285 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1952290828 -xf /var/lib/minikube/build/build.1952290828.tar
I1217 07:58:06.508174  596285 crio.go:315] Building image: /var/lib/minikube/build/build.1952290828
I1217 07:58:06.508258  596285 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-981680 /var/lib/minikube/build/build.1952290828 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 07:58:09.568401  596285 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-981680 /var/lib/minikube/build/build.1952290828 --cgroup-manager=cgroupfs: (3.06009551s)
I1217 07:58:09.568471  596285 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1952290828
I1217 07:58:09.578120  596285 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1952290828.tar
I1217 07:58:09.587293  596285 build_images.go:218] Built localhost/my-image:functional-981680 from /tmp/build.1952290828.tar
I1217 07:58:09.587359  596285 build_images.go:134] succeeded building to: functional-981680
I1217 07:58:09.587367  596285 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
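The STEP 1/3 through 3/3 lines above come from the small build context under testdata/build, which the CLI tars up, copies into the node, and hands to podman. A rough sketch of an equivalent context, with illustrative file names and content that are assumptions rather than copies of the repository's testdata:

    # Recreate a minimal build context matching the three steps shown above.
    mkdir -p /tmp/build-demo
    printf 'hello from the build context\n' > /tmp/build-demo/content.txt
    cat > /tmp/build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
    # Build it in-cluster the same way the test does.
    out/minikube-linux-amd64 -p functional-981680 image build \
      -t localhost/my-image:demo /tmp/build-demo --alsologtostderr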

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.893219423s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-981680
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
I1217 07:57:50.545737  556055 detect.go:223] nested VM detected
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "502.999628ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "96.065874ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "498.17527ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "94.965353ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
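The profile timings above (roughly 0.5s for a full listing versus under 100ms with -l or --light) reflect that the light variants skip live status checks. For scripting, the JSON form can be filtered; a small sketch assuming jq and assuming the output keeps its valid/invalid top-level arrays with Name and Status fields, which this test does not itself assert:

    # Print the name and status of every valid profile.
    out/minikube-linux-amd64 profile list -o json \
      | jq -r '.valid[] | "\(.Name)\t\(.Status)"'
    # The --light form returns quickly because it skips cluster status probes.
    out/minikube-linux-amd64 profile list -o json --light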

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (12.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdany-port2851537417/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765958272176510538" to /tmp/TestFunctionalparallelMountCmdany-port2851537417/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765958272176510538" to /tmp/TestFunctionalparallelMountCmdany-port2851537417/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765958272176510538" to /tmp/TestFunctionalparallelMountCmdany-port2851537417/001/test-1765958272176510538
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (373.333836ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 07:57:52.550441  556055 retry.go:31] will retry after 403.460071ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 07:57 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 07:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 07:57 test-1765958272176510538
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh cat /mount-9p/test-1765958272176510538
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-981680 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [fcc95d0f-afe9-4885-a7b7-b88595267750] Pending
helpers_test.go:353: "busybox-mount" [fcc95d0f-afe9-4885-a7b7-b88595267750] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [fcc95d0f-afe9-4885-a7b7-b88595267750] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [fcc95d0f-afe9-4885-a7b7-b88595267750] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.00507824s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-981680 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdany-port2851537417/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.37s)
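The mount workflow above (background mount daemon, findmnt probe with one retry, pod round-trip, forced unmount) can be reproduced by hand with the same commands the test drives; the host directory below is an arbitrary example rather than the temp path from this run:

    # Start a 9p mount in the background and remember its PID.
    out/minikube-linux-amd64 mount -p functional-981680 /tmp/demo-share:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    # Confirm the guest sees a 9p filesystem at the mount point, then list it.
    out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-981680 ssh -- ls -la /mount-9p
    # Tear down: unmount inside the node and stop the host-side mount process.
    out/minikube-linux-amd64 -p functional-981680 ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"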

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image load --daemon kicbase/echo-server:functional-981680 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-981680 image load --daemon kicbase/echo-server:functional-981680 --alsologtostderr: (1.304790564s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-981680
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image load --daemon kicbase/echo-server:functional-981680 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image save kicbase/echo-server:functional-981680 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image rm kicbase/echo-server:functional-981680 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-981680
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 image save --daemon kicbase/echo-server:functional-981680 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-981680
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
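Taken together, the four passing tests above (ImageSaveToFile, ImageRemove, ImageLoadFromFile, ImageSaveDaemon) cover a full image round trip. A condensed sketch of the same flow, using an arbitrary tarball path instead of the Jenkins workspace path from this run:

    # Save an image from the cluster to a tarball, drop it, then load it back.
    out/minikube-linux-amd64 -p functional-981680 image save kicbase/echo-server:functional-981680 /tmp/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-981680 image rm kicbase/echo-server:functional-981680 --alsologtostderr
    out/minikube-linux-amd64 -p functional-981680 image load /tmp/echo-server-save.tar --alsologtostderr
    # Or export the cluster's copy straight into the host docker daemon.
    out/minikube-linux-amd64 -p functional-981680 image save --daemon kicbase/echo-server:functional-981680 --alsologtostderr
    docker image inspect localhost/kicbase/echo-server:functional-981680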

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdspecific-port3499221795/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.774861ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 07:58:04.911514  556055 retry.go:31] will retry after 477.205684ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdspecific-port3499221795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-981680 ssh "sudo umount -f /mount-9p": exit status 1 (304.018571ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-981680 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdspecific-port3499221795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1943616431/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1943616431/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1943616431/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T" /mount2
2025/12/17 07:58:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-981680 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-981680 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1943616431/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1943616431/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-981680 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1943616431/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-981680
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-981680
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-981680
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22182-552461/.minikube/files/etc/test/nested/copy/556055/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (41.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819971 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-819971 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (41.121948743s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (41.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 07:59:00.955053  556055 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819971 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-819971 --alsologtostderr -v=8: (6.861383408s)
functional_test.go:678: soft start took 6.861756374s for "functional-819971" cluster.
I1217 07:59:07.816847  556055 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.86s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-819971 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 cache add registry.k8s.io/pause:3.3: (1.017731871s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2334303155/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cache add minikube-local-cache-test:functional-819971
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 cache add minikube-local-cache-test:functional-819971: (1.760084947s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cache delete minikube-local-cache-test:functional-819971
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-819971
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (320.735407ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.73s)
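The cache subcommands exercised above amount to a restore workflow: an image is added to minikube's host-side cache, deleted out from under crio inside the node, and then put back with cache reload. A condensed sketch using the same commands:

    # Cache an image on the host and load it into the node.
    out/minikube-linux-amd64 -p functional-819971 cache add registry.k8s.io/pause:latest
    # Remove it inside the node; crictl inspecti now fails.
    out/minikube-linux-amd64 -p functional-819971 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Reload from the host-side cache and verify the image is back.
    out/minikube-linux-amd64 -p functional-819971 cache reload
    out/minikube-linux-amd64 -p functional-819971 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # Drop the cache entry when finished.
    out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest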

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 kubectl -- --context functional-819971 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-819971 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (62.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819971 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 07:59:38.378783  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-819971 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.737427133s)
functional_test.go:776: restart took 1m2.737619613s for "functional-819971" cluster.
I1217 08:00:18.261902  556055 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (62.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-819971 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)
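The health check above pulls the control-plane pods as JSON and reports each component's phase and Ready status. The same information can be read directly with a jsonpath query; the label selector is the one the test uses, while the jsonpath expression itself is an illustration rather than something taken from the test:

    # Phase and Ready condition for each control-plane pod.
    kubectl --context functional-819971 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'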

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 logs: (1.338621754s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3370204192/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3370204192/001/logs.txt: (1.351912458s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (5.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-819971 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-819971
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-819971: exit status 115 (378.32182ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30944 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-819971 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-819971 delete -f testdata/invalidsvc.yaml: (1.784736364s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (5.34s)
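
A rough reproduction of the SVC_UNREACHABLE case, using a hypothetical NodePort service with no backing pods as a stand-in for testdata/invalidsvc.yaml (whose contents are not shown here); in this run the lookup exited 115:

    # a NodePort service whose selector matches no running pods
    kubectl --context functional-819971 create service nodeport invalid-svc --tcp=80:80
    # minikube refuses to print a URL for a service without running endpoints
    out/minikube-linux-amd64 service invalid-svc -p functional-819971
    kubectl --context functional-819971 delete service invalid-svc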

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 config get cpus: exit status 14 (86.201932ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 config get cpus: exit status 14 (90.233483ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.50s)
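
A minimal sketch of the config subcommand behaviour exercised above, with the exit code observed in this run noted inline:

    out/minikube-linux-amd64 -p functional-819971 config set cpus 2
    out/minikube-linux-amd64 -p functional-819971 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-819971 config unset cpus
    out/minikube-linux-amd64 -p functional-819971 config get cpus     # key not found: exit 14 in this run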

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (12.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-819971 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-819971 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 612363: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (12.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819971 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-819971 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (258.448887ms)

                                                
                                                
-- stdout --
	* [functional-819971] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:00:58.126776  613010 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:00:58.127099  613010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:00:58.127114  613010 out.go:374] Setting ErrFile to fd 2...
	I1217 08:00:58.127120  613010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:00:58.127415  613010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:00:58.128088  613010 out.go:368] Setting JSON to false
	I1217 08:00:58.129585  613010 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6203,"bootTime":1765952255,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:00:58.129669  613010 start.go:143] virtualization: kvm guest
	I1217 08:00:58.131161  613010 out.go:179] * [functional-819971] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:00:58.133162  613010 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:00:58.133179  613010 notify.go:221] Checking for updates...
	I1217 08:00:58.135670  613010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:00:58.138174  613010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:00:58.139997  613010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:00:58.141729  613010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:00:58.147064  613010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:00:58.149960  613010 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:00:58.150737  613010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:00:58.183284  613010 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:00:58.183455  613010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:00:58.255098  613010 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 08:00:58.240938536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:00:58.255270  613010 docker.go:319] overlay module found
	I1217 08:00:58.257973  613010 out.go:179] * Using the docker driver based on existing profile
	I1217 08:00:58.260380  613010 start.go:309] selected driver: docker
	I1217 08:00:58.260406  613010 start.go:927] validating driver "docker" against &{Name:functional-819971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-819971 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:00:58.260552  613010 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:00:58.262782  613010 out.go:203] 
	W1217 08:00:58.264314  613010 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 08:00:58.265721  613010 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819971 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.59s)
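
A sketch of the dry-run validation above: --dry-run checks the flags against the existing profile without changing it. The 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY check (exit 23 in this run), while the same invocation without the undersized memory request is accepted, as in the second command of the test:

    # rejected: below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-819971 --dry-run --memory 250MB \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
    # accepted: same profile, no undersized memory request
    out/minikube-linux-amd64 start -p functional-819971 --dry-run \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1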

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819971 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-819971 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (263.530461ms)

                                                
                                                
-- stdout --
	* [functional-819971] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:00:57.826912  612818 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:00:57.827031  612818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:00:57.827042  612818 out.go:374] Setting ErrFile to fd 2...
	I1217 08:00:57.827048  612818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:00:57.827481  612818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:00:57.828101  612818 out.go:368] Setting JSON to false
	I1217 08:00:57.829189  612818 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6203,"bootTime":1765952255,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:00:57.829275  612818 start.go:143] virtualization: kvm guest
	I1217 08:00:57.832175  612818 out.go:179] * [functional-819971] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 08:00:57.834673  612818 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:00:57.834691  612818 notify.go:221] Checking for updates...
	I1217 08:00:57.837927  612818 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:00:57.839563  612818 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:00:57.841161  612818 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:00:57.842789  612818 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:00:57.844412  612818 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:00:57.846589  612818 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:00:57.847331  612818 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:00:57.885411  612818 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:00:57.885660  612818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:00:57.980149  612818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 08:00:57.962602528 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:00:57.980256  612818 docker.go:319] overlay module found
	I1217 08:00:57.983570  612818 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 08:00:57.986126  612818 start.go:309] selected driver: docker
	I1217 08:00:57.986155  612818 start.go:927] validating driver "docker" against &{Name:functional-819971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-819971 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:00:57.986382  612818 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:00:57.988709  612818 out.go:203] 
	W1217 08:00:57.994305  612818 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 08:00:57.995952  612818 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.07s)
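
A sketch of the three status formats exercised above: the default table, a Go template over the status fields (Host, Kubelet, APIServer, Kubeconfig), and JSON output:

    out/minikube-linux-amd64 -p functional-819971 status
    out/minikube-linux-amd64 -p functional-819971 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-819971 status -o json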

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (11.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-819971 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-819971 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-rfpvh" [9df9e358-fc80-401d-b60d-2d6909a6bbb3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-rfpvh" [9df9e358-fc80-401d-b60d-2d6909a6bbb3] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00417754s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32645
functional_test.go:1680: http://192.168.49.2:32645: success! body:
Request served by hello-node-connect-9f67c86d4-rfpvh

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32645
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (11.72s)
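
A sketch of the NodePort round trip above, with curl standing in for the test's Go HTTP client and the wait step added here as a convenience:

    kubectl --context functional-819971 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-819971 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-819971 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-819971 service hello-node-connect --url)
    curl -s "$URL"    # echo-server replies with the request it received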

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (28.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [5d927f70-e5cd-43ca-9a00-eb32671e9052] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.017296299s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-819971 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-819971 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-819971 get pvc myclaim -o=json
I1217 08:00:35.748034  556055 retry.go:31] will retry after 2.713576033s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:6b225b6d-0d9e-4b1d-afb9-3d11d53e5106 ResourceVersion:652 Generation:0 CreationTimestamp:2025-12-17 08:00:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc000bbdf50 VolumeMode:0xc000bbdf60 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-819971 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-819971 apply -f testdata/storage-provisioner/pod.yaml
I1217 08:00:38.663335  556055 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [eae0d1ea-8d08-4381-a8c9-90942a95cc51] Pending
helpers_test.go:353: "sp-pod" [eae0d1ea-8d08-4381-a8c9-90942a95cc51] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [eae0d1ea-8d08-4381-a8c9-90942a95cc51] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006551689s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-819971 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-819971 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-819971 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [aa0964bf-769e-4f10-a6b9-e5d82ce4b3f6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [aa0964bf-769e-4f10-a6b9-e5d82ce4b3f6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004017096s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-819971 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (28.41s)
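
A sketch of the persistence check above, using the same testdata manifests from the minikube repository (the PVC requests 500Mi ReadWriteOnce per the object dump); the jsonpath probe is an added convenience:

    kubectl --context functional-819971 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-819971 get pvc myclaim -o jsonpath='{.status.phase}'   # eventually Bound
    kubectl --context functional-819971 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-819971 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-819971 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-819971 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-819971 exec sp-pod -- ls /tmp/mount                    # foo survives the pod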

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh -n functional-819971 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cp functional-819971:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm84294195/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh -n functional-819971 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh -n functional-819971 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.93s)
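
A sketch of the copy directions exercised above: host to node, node back to host, then a read-back over ssh to confirm the contents:

    out/minikube-linux-amd64 -p functional-819971 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-819971 cp functional-819971:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-819971 ssh -n functional-819971 "sudo cat /home/docker/cp-test.txt"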

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (23.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-819971 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-sgf6k" [053e795e-c022-4283-b10d-274effd7252a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-sgf6k" [053e795e-c022-4283-b10d-274effd7252a] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 19.003176021s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- mysql -ppassword -e "show databases;": exit status 1 (135.619522ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:00:46.364435  556055 retry.go:31] will retry after 809.081285ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- mysql -ppassword -e "show databases;": exit status 1 (104.899927ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:00:47.279461  556055 retry.go:31] will retry after 1.519550552s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- mysql -ppassword -e "show databases;": exit status 1 (93.011805ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:00:48.892789  556055 retry.go:31] will retry after 1.592413085s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (23.59s)
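
The access-denied (1045) and socket (2002) errors above are mysqld still initializing; the test simply retries with backoff. A rough shell equivalent, assuming the pod name from this run:

    until kubectl --context functional-819971 exec mysql-7d7b65bc95-sgf6k -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2   # retry until mysqld accepts root connections
    done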

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/556055/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /etc/test/nested/copy/556055/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.32s)
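
The check above relies on minikube's file sync: files placed under $MINIKUBE_HOME/files before start are copied into the node at the mirrored path (556055 here is the test process id). A minimal verification sketch:

    # file was staged at $MINIKUBE_HOME/files/etc/test/nested/copy/556055/hosts before "minikube start"
    out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /etc/test/nested/copy/556055/hosts"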

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/556055.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /etc/ssl/certs/556055.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/556055.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /usr/share/ca-certificates/556055.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5560552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /etc/ssl/certs/5560552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5560552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /usr/share/ca-certificates/5560552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-819971 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)
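
A sketch of listing node labels, with the go-template from the run above and a jsonpath equivalent (node name assumed to match the profile):

    kubectl --context functional-819971 get nodes \
      -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    kubectl --context functional-819971 get node functional-819971 -o jsonpath='{.metadata.labels}'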

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh "sudo systemctl is-active docker": exit status 1 (331.540405ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh "sudo systemctl is-active containerd": exit status 1 (320.722488ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.65s)
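
A sketch of the runtime isolation check above: with --container-runtime=crio only crio should be active, and systemctl is-active exits 3 for inactive units, which minikube ssh surfaces as the non-zero exits seen here:

    out/minikube-linux-amd64 -p functional-819971 ssh "sudo systemctl is-active crio"        # active, exit 0
    out/minikube-linux-amd64 -p functional-819971 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-819971 ssh "sudo systemctl is-active containerd"  # inactive, non-zero exit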

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819971 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-819971
localhost/kicbase/echo-server:functional-819971
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819971 image ls --format short --alsologtostderr:
I1217 08:00:59.804167  614079 out.go:360] Setting OutFile to fd 1 ...
I1217 08:00:59.804461  614079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:00:59.804472  614079 out.go:374] Setting ErrFile to fd 2...
I1217 08:00:59.804476  614079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:00:59.804710  614079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 08:00:59.805342  614079 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:00:59.805436  614079 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:00:59.805914  614079 cli_runner.go:164] Run: docker container inspect functional-819971 --format={{.State.Status}}
I1217 08:00:59.830452  614079 ssh_runner.go:195] Run: systemctl --version
I1217 08:00:59.830523  614079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819971
I1217 08:00:59.852957  614079 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-819971/id_ed25519 Username:docker}
I1217 08:00:59.947147  614079 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819971 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1                          │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1                          │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1                          │ 58865405a13bc │ 90.8MB │
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-819971                     │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-819971                     │ b158322e9d6ff │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1                          │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/etcd                    │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819971 image ls --format table --alsologtostderr:
I1217 08:01:02.309775  615453 out.go:360] Setting OutFile to fd 1 ...
I1217 08:01:02.310063  615453 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:02.310075  615453 out.go:374] Setting ErrFile to fd 2...
I1217 08:01:02.310080  615453 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:02.310338  615453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 08:01:02.310984  615453 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:02.311096  615453 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:02.311579  615453 cli_runner.go:164] Run: docker container inspect functional-819971 --format={{.State.Status}}
I1217 08:01:02.331256  615453 ssh_runner.go:195] Run: systemctl --version
I1217 08:01:02.331317  615453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819971
I1217 08:01:02.351587  615453 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-819971/id_ed25519 Username:docker}
I1217 08:01:02.448310  615453 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819971 image ls --format json --alsologtostderr:
[{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server
@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-819971"],"size":"4943877"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["regi
stry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bd
fec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k
8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests
":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2
722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"b158322e9d6ffe0a4c9d95ec9dbc367c61b122be0019b526b7caf
a4fa245eef7","repoDigests":["localhost/minikube-local-cache-test@sha256:6522116068c20da7671947b36292f5e002943fa7ace3f429b158e370175d657e"],"repoTags":["localhost/minikube-local-cache-test:functional-819971"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819971 image ls --format json --alsologtostderr:
I1217 08:01:02.049494  615344 out.go:360] Setting OutFile to fd 1 ...
I1217 08:01:02.049624  615344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:02.049634  615344 out.go:374] Setting ErrFile to fd 2...
I1217 08:01:02.049638  615344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:02.049863  615344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 08:01:02.050495  615344 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:02.050602  615344 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:02.051080  615344 cli_runner.go:164] Run: docker container inspect functional-819971 --format={{.State.Status}}
I1217 08:01:02.071956  615344 ssh_runner.go:195] Run: systemctl --version
I1217 08:01:02.072009  615344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819971
I1217 08:01:02.091509  615344 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-819971/id_ed25519 Username:docker}
I1217 08:01:02.188561  615344 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819971 image ls --format yaml --alsologtostderr:
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: b158322e9d6ffe0a4c9d95ec9dbc367c61b122be0019b526b7cafa4fa245eef7
repoDigests:
- localhost/minikube-local-cache-test@sha256:6522116068c20da7671947b36292f5e002943fa7ace3f429b158e370175d657e
repoTags:
- localhost/minikube-local-cache-test:functional-819971
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-819971
size: "4943877"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819971 image ls --format yaml --alsologtostderr:
I1217 08:01:00.051388  614226 out.go:360] Setting OutFile to fd 1 ...
I1217 08:01:00.051518  614226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:00.051552  614226 out.go:374] Setting ErrFile to fd 2...
I1217 08:01:00.051561  614226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:00.051770  614226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 08:01:00.052370  614226 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:00.052484  614226 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:00.052985  614226 cli_runner.go:164] Run: docker container inspect functional-819971 --format={{.State.Status}}
I1217 08:01:00.072809  614226 ssh_runner.go:195] Run: systemctl --version
I1217 08:01:00.072869  614226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819971
I1217 08:01:00.092503  614226 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-819971/id_ed25519 Username:docker}
I1217 08:01:00.198904  614226 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.30s)
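Note: the four ImageList cases above all exercise the same listing path with a different --format flag, and the stderr traces show that on the crio runtime the listing is ultimately backed by "sudo crictl images --output json" on the node. A minimal reproduction outside the harness, assuming a running profile named functional-819971, would look roughly like:

  $ out/minikube-linux-amd64 -p functional-819971 image ls --format short   # one image reference per line
  $ out/minikube-linux-amd64 -p functional-819971 image ls --format table   # boxed table: IMAGE / TAG / IMAGE ID / SIZE
  $ out/minikube-linux-amd64 -p functional-819971 image ls --format json    # array of {id, repoDigests, repoTags, size}
  $ out/minikube-linux-amd64 -p functional-819971 image ls --format yaml    # same fields rendered as YAML
  # the underlying listing on the node itself:
  $ out/minikube-linux-amd64 -p functional-819971 ssh -- sudo crictl images --output json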

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh pgrep buildkitd: exit status 1 (473.684892ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image build -t localhost/my-image:functional-819971 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 image build -t localhost/my-image:functional-819971 testdata/build --alsologtostderr: (3.070814437s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819971 image build -t localhost/my-image:functional-819971 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2765221d45a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-819971
--> 89df5b880eb
Successfully tagged localhost/my-image:functional-819971
89df5b880eb2a6ac0d7a60c0417f2c141146d297181c62e1a89b859c3e741364
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819971 image build -t localhost/my-image:functional-819971 testdata/build --alsologtostderr:
I1217 08:01:00.844527  614907 out.go:360] Setting OutFile to fd 1 ...
I1217 08:01:00.844670  614907 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:00.844682  614907 out.go:374] Setting ErrFile to fd 2...
I1217 08:01:00.844689  614907 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:01:00.845004  614907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
I1217 08:01:00.845833  614907 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:00.846845  614907 config.go:182] Loaded profile config "functional-819971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:01:00.847622  614907 cli_runner.go:164] Run: docker container inspect functional-819971 --format={{.State.Status}}
I1217 08:01:00.873213  614907 ssh_runner.go:195] Run: systemctl --version
I1217 08:01:00.873295  614907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819971
I1217 08:01:00.898658  614907 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/functional-819971/id_ed25519 Username:docker}
I1217 08:01:01.006125  614907 build_images.go:162] Building image from path: /tmp/build.3528897963.tar
I1217 08:01:01.006210  614907 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 08:01:01.015763  614907 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3528897963.tar
I1217 08:01:01.020852  614907 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3528897963.tar: stat -c "%s %y" /var/lib/minikube/build/build.3528897963.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3528897963.tar': No such file or directory
I1217 08:01:01.020889  614907 ssh_runner.go:362] scp /tmp/build.3528897963.tar --> /var/lib/minikube/build/build.3528897963.tar (3072 bytes)
I1217 08:01:01.043150  614907 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3528897963
I1217 08:01:01.052697  614907 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3528897963 -xf /var/lib/minikube/build/build.3528897963.tar
I1217 08:01:01.062750  614907 crio.go:315] Building image: /var/lib/minikube/build/build.3528897963
I1217 08:01:01.062837  614907 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-819971 /var/lib/minikube/build/build.3528897963 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 08:01:03.807189  614907 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-819971 /var/lib/minikube/build/build.3528897963 --cgroup-manager=cgroupfs: (2.744317854s)
I1217 08:01:03.807280  614907 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3528897963
I1217 08:01:03.816562  614907 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3528897963.tar
I1217 08:01:03.825709  614907 build_images.go:218] Built localhost/my-image:functional-819971 from /tmp/build.3528897963.tar
I1217 08:01:03.825742  614907 build_images.go:134] succeeded building to: functional-819971
I1217 08:01:03.825748  614907 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.78s)
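Note: the ImageBuild stderr shows the mechanics of "image build" on this driver: the local testdata/build context is tarred under /tmp, copied to /var/lib/minikube/build on the node, unpacked, and (with the crio runtime) built via "sudo podman build". A hand-run sketch of the same flow, assuming a local directory ./build with a Dockerfile along the lines of the three steps printed above, is roughly:

  $ cat build/Dockerfile
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /
  $ out/minikube-linux-amd64 -p functional-819971 image build -t localhost/my-image:functional-819971 ./build --alsologtostderr
  $ out/minikube-linux-amd64 -p functional-819971 image ls | grep my-image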

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-819971
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.90s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)
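Note: the three UpdateContextCmd cases exercise "minikube update-context", which refreshes the kubeconfig entry for the profile when the cluster IP or port has changed. A quick manual check after running it, assuming kubectl is on PATH, would be roughly:

  $ out/minikube-linux-amd64 -p functional-819971 update-context --alsologtostderr -v=2
  $ kubectl config current-context                      # expected: functional-819971
  $ kubectl --context functional-819971 get nodes       # confirms the refreshed endpoint works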

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image load --daemon kicbase/echo-server:functional-819971 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 image load --daemon kicbase/echo-server:functional-819971 --alsologtostderr: (1.142734354s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image load --daemon kicbase/echo-server:functional-819971 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.13s)
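Note: ImageLoadDaemon and ImageReloadDaemon push an image that exists only in the host Docker daemon into the cluster's crio image store. The setup sequence these tests use (pull, retag, load, verify) can be repeated by hand, assuming Docker is available on the host:

  $ docker pull kicbase/echo-server:1.0
  $ docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-819971
  $ out/minikube-linux-amd64 -p functional-819971 image load --daemon kicbase/echo-server:functional-819971 --alsologtostderr
  $ out/minikube-linux-amd64 -p functional-819971 image ls | grep echo-server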

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-819971 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-819971 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-819971 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 608539: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-819971 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-819971 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (19.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-819971 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [8669b647-1d8c-4250-a03c-f35efc5bdb35] Pending
helpers_test.go:353: "nginx-svc" [8669b647-1d8c-4250-a03c-f35efc5bdb35] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [8669b647-1d8c-4250-a03c-f35efc5bdb35] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.003092317s
I1217 08:00:49.015474  556055 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (19.31s)
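Note: WaitService/Setup applies testdata/testsvc.yaml and then polls until pods labelled run=nginx-svc are Running. Outside the harness the same wait can be approximated with kubectl; the 4m timeout below mirrors the test's own limit and is an assumption, not part of the original run:

  $ kubectl --context functional-819971 apply -f testdata/testsvc.yaml
  $ kubectl --context functional-819971 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m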

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (1.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image save kicbase/echo-server:functional-819971 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 image save kicbase/echo-server:functional-819971 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.57971854s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (1.58s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image rm kicbase/echo-server:functional-819971 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-819971
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 image save --daemon kicbase/echo-server:functional-819971 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-819971
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.42s)
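Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a round trip: export the image from the cluster to a tarball, delete it from the cluster, load it back from the tarball, and finally export it into the host Docker daemon. With the same tag and an arbitrary output path (the path here is an assumption, the run above used the Jenkins workspace), the sequence is roughly:

  $ out/minikube-linux-amd64 -p functional-819971 image save kicbase/echo-server:functional-819971 ./echo-server-save.tar
  $ out/minikube-linux-amd64 -p functional-819971 image rm kicbase/echo-server:functional-819971
  $ out/minikube-linux-amd64 -p functional-819971 image load ./echo-server-save.tar
  $ out/minikube-linux-amd64 -p functional-819971 image save --daemon kicbase/echo-server:functional-819971
  $ docker image inspect localhost/kicbase/echo-server:functional-819971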

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-819971 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.13.30 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-819971 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
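Note: the TunnelCmd serial group walks the full tunnel workflow: start "minikube tunnel" in the background, create a LoadBalancer service, wait for an ingress IP, hit it directly, then tear the tunnel down. A manual sketch of the same steps, assuming testdata/testsvc.yaml defines the nginx-svc LoadBalancer service used above:

  $ out/minikube-linux-amd64 -p functional-819971 tunnel --alsologtostderr &   # must stay running
  $ kubectl --context functional-819971 apply -f testdata/testsvc.yaml
  $ kubectl --context functional-819971 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  $ curl http://<ingress-ip>/          # e.g. 10.100.13.30 in this run
  $ kill %1                            # stop the background tunnel when done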

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (9.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-819971 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-819971 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-qfq66" [37d4ea82-98d8-4fe7-abcc-59e926aaaa5c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-qfq66" [37d4ea82-98d8-4fe7-abcc-59e926aaaa5c] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.00724815s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (9.15s)
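Note: ServiceCmd/DeployApp creates a Deployment from the kicbase/echo-server image and exposes it on a NodePort; the later ServiceCmd cases then query that service. The equivalent manual steps are:

  $ kubectl --context functional-819971 create deployment hello-node --image kicbase/echo-server
  $ kubectl --context functional-819971 expose deployment hello-node --type=NodePort --port=8080
  $ out/minikube-linux-amd64 -p functional-819971 service list            # human-readable listing
  $ out/minikube-linux-amd64 -p functional-819971 service list -o json    # machine-readable listing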

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "432.571201ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "75.119345ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (7.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2751610430/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765958450604915580" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2751610430/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765958450604915580" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2751610430/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765958450604915580" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2751610430/001/test-1765958450604915580
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (342.677066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:00:50.947908  556055 retry.go:31] will retry after 614.349669ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T /mount-9p | grep 9p"
I1217 08:00:51.629203  556055 detect.go:223] nested VM detected
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 08:00 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 08:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 08:00 test-1765958450604915580
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh cat /mount-9p/test-1765958450604915580
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-819971 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [352d7e64-fa3a-412a-8365-663f8745f8ba] Pending
helpers_test.go:353: "busybox-mount" [352d7e64-fa3a-412a-8365-663f8745f8ba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [352d7e64-fa3a-412a-8365-663f8745f8ba] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [352d7e64-fa3a-412a-8365-663f8745f8ba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004575365s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-819971 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2751610430/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (7.24s)
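Note: MountCmd/any-port exercises the 9p mount path: a host temp directory is mounted into the guest at /mount-9p, verified with findmnt, read and written from a pod, then unmounted. Assuming an arbitrary host directory /tmp/mnt-demo (a placeholder, not the path from this run), the manual equivalent is roughly:

  $ mkdir -p /tmp/mnt-demo && echo hello > /tmp/mnt-demo/created-by-test
  $ out/minikube-linux-amd64 mount -p functional-819971 /tmp/mnt-demo:/mount-9p --alsologtostderr -v=1 &
  $ out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T /mount-9p | grep 9p"
  $ out/minikube-linux-amd64 -p functional-819971 ssh -- ls -la /mount-9p
  $ out/minikube-linux-amd64 -p functional-819971 ssh "sudo umount -f /mount-9p"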

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "362.517038ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "90.57195ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.45s)
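Note: the ProfileCmd cases time the profile listing in its different output modes; the light variants (-l for the table form, --light for JSON) skip probing each cluster's status, which is why they return in well under 100ms above while the full listings take several hundred milliseconds. For reference:

  $ out/minikube-linux-amd64 profile list                  # table, probes each cluster
  $ out/minikube-linux-amd64 profile list -l               # table, light mode (no status probe)
  $ out/minikube-linux-amd64 profile list -o json
  $ out/minikube-linux-amd64 profile list -o json --light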

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2859970571/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (394.851831ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:00:58.241584  556055 retry.go:31] will retry after 733.567592ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2859970571/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh "sudo umount -f /mount-9p": exit status 1 (295.778668ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-819971 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2859970571/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 service list: (1.818839383s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.82s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1333544889/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1333544889/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1333544889/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T" /mount1: exit status 1 (481.889045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:01:00.591838  556055 retry.go:31] will retry after 424.07512ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-819971 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1333544889/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1333544889/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819971 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1333544889/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-819971 service list -o json: (1.974098662s)
functional_test.go:1504: Took "1.974222572s" to run "out/minikube-linux-amd64 -p functional-819971 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30696
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-819971 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30696
2025/12/17 08:01:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-819971
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-819971
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-819971
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (143.41s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 08:01:54.519795  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:22.221128  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:37.555737  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:37.562177  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:37.573633  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:37.595164  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:37.636656  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:37.718089  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:37.879678  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:38.201369  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:38.843437  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:40.125000  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:42.686722  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:47.808693  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:02:58.050057  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:03:18.532099  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m22.676891147s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (143.41s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.53s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 kubectl -- rollout status deployment/busybox: (3.464684197s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-czq2m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-g5mnz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-np5qh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-czq2m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-g5mnz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-np5qh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-czq2m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-g5mnz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-np5qh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.53s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-czq2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-czq2m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-g5mnz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-g5mnz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-np5qh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 kubectl -- exec busybox-7b57f96db7-np5qh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (32.83s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 node add --alsologtostderr -v 5
E1217 08:03:59.493708  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 node add --alsologtostderr -v 5: (31.937907702s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.83s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-046400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.89s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp testdata/cp-test.txt ha-046400:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1602535105/001/cp-test_ha-046400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400:/home/docker/cp-test.txt ha-046400-m02:/home/docker/cp-test_ha-046400_ha-046400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test_ha-046400_ha-046400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400:/home/docker/cp-test.txt ha-046400-m03:/home/docker/cp-test_ha-046400_ha-046400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test_ha-046400_ha-046400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400:/home/docker/cp-test.txt ha-046400-m04:/home/docker/cp-test_ha-046400_ha-046400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test_ha-046400_ha-046400-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp testdata/cp-test.txt ha-046400-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1602535105/001/cp-test_ha-046400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m02:/home/docker/cp-test.txt ha-046400:/home/docker/cp-test_ha-046400-m02_ha-046400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test_ha-046400-m02_ha-046400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m02:/home/docker/cp-test.txt ha-046400-m03:/home/docker/cp-test_ha-046400-m02_ha-046400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test_ha-046400-m02_ha-046400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m02:/home/docker/cp-test.txt ha-046400-m04:/home/docker/cp-test_ha-046400-m02_ha-046400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test_ha-046400-m02_ha-046400-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp testdata/cp-test.txt ha-046400-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1602535105/001/cp-test_ha-046400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m03:/home/docker/cp-test.txt ha-046400:/home/docker/cp-test_ha-046400-m03_ha-046400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test_ha-046400-m03_ha-046400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m03:/home/docker/cp-test.txt ha-046400-m02:/home/docker/cp-test_ha-046400-m03_ha-046400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test_ha-046400-m03_ha-046400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m03:/home/docker/cp-test.txt ha-046400-m04:/home/docker/cp-test_ha-046400-m03_ha-046400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test_ha-046400-m03_ha-046400-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp testdata/cp-test.txt ha-046400-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1602535105/001/cp-test_ha-046400-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m04:/home/docker/cp-test.txt ha-046400:/home/docker/cp-test_ha-046400-m04_ha-046400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400 "sudo cat /home/docker/cp-test_ha-046400-m04_ha-046400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m04:/home/docker/cp-test.txt ha-046400-m02:/home/docker/cp-test_ha-046400-m04_ha-046400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m02 "sudo cat /home/docker/cp-test_ha-046400-m04_ha-046400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 cp ha-046400-m04:/home/docker/cp-test.txt ha-046400-m03:/home/docker/cp-test_ha-046400-m04_ha-046400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 ssh -n ha-046400-m03 "sudo cat /home/docker/cp-test_ha-046400-m04_ha-046400-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.89s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.4s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 node stop m02 --alsologtostderr -v 5: (18.674976318s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5: exit status 7 (725.386312ms)

                                                
                                                
-- stdout --
	ha-046400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-046400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-046400-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-046400-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:04:48.723219  635877 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:04:48.723334  635877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:04:48.723344  635877 out.go:374] Setting ErrFile to fd 2...
	I1217 08:04:48.723348  635877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:04:48.723578  635877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:04:48.723746  635877 out.go:368] Setting JSON to false
	I1217 08:04:48.723781  635877 mustload.go:66] Loading cluster: ha-046400
	I1217 08:04:48.723903  635877 notify.go:221] Checking for updates...
	I1217 08:04:48.724332  635877 config.go:182] Loaded profile config "ha-046400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:04:48.724348  635877 status.go:174] checking status of ha-046400 ...
	I1217 08:04:48.724848  635877 cli_runner.go:164] Run: docker container inspect ha-046400 --format={{.State.Status}}
	I1217 08:04:48.746070  635877 status.go:371] ha-046400 host status = "Running" (err=<nil>)
	I1217 08:04:48.746116  635877 host.go:66] Checking if "ha-046400" exists ...
	I1217 08:04:48.746405  635877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-046400
	I1217 08:04:48.766731  635877 host.go:66] Checking if "ha-046400" exists ...
	I1217 08:04:48.767047  635877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:04:48.767091  635877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-046400
	I1217 08:04:48.785528  635877 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/ha-046400/id_ed25519 Username:docker}
	I1217 08:04:48.877306  635877 ssh_runner.go:195] Run: systemctl --version
	I1217 08:04:48.883735  635877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:04:48.896817  635877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:04:48.959313  635877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 08:04:48.948905545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:04:48.960026  635877 kubeconfig.go:125] found "ha-046400" server: "https://192.168.49.254:8443"
	I1217 08:04:48.960064  635877 api_server.go:166] Checking apiserver status ...
	I1217 08:04:48.960102  635877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:04:48.973335  635877 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W1217 08:04:48.984272  635877 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:04:48.984319  635877 ssh_runner.go:195] Run: ls
	I1217 08:04:48.988480  635877 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 08:04:48.992927  635877 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 08:04:48.992957  635877 status.go:463] ha-046400 apiserver status = Running (err=<nil>)
	I1217 08:04:48.992982  635877 status.go:176] ha-046400 status: &{Name:ha-046400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:04:48.993004  635877 status.go:174] checking status of ha-046400-m02 ...
	I1217 08:04:48.993241  635877 cli_runner.go:164] Run: docker container inspect ha-046400-m02 --format={{.State.Status}}
	I1217 08:04:49.013369  635877 status.go:371] ha-046400-m02 host status = "Stopped" (err=<nil>)
	I1217 08:04:49.013398  635877 status.go:384] host is not running, skipping remaining checks
	I1217 08:04:49.013405  635877 status.go:176] ha-046400-m02 status: &{Name:ha-046400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:04:49.013438  635877 status.go:174] checking status of ha-046400-m03 ...
	I1217 08:04:49.013797  635877 cli_runner.go:164] Run: docker container inspect ha-046400-m03 --format={{.State.Status}}
	I1217 08:04:49.032984  635877 status.go:371] ha-046400-m03 host status = "Running" (err=<nil>)
	I1217 08:04:49.033010  635877 host.go:66] Checking if "ha-046400-m03" exists ...
	I1217 08:04:49.033250  635877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-046400-m03
	I1217 08:04:49.052213  635877 host.go:66] Checking if "ha-046400-m03" exists ...
	I1217 08:04:49.052468  635877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:04:49.052502  635877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-046400-m03
	I1217 08:04:49.071077  635877 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33200 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/ha-046400-m03/id_ed25519 Username:docker}
	I1217 08:04:49.162176  635877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:04:49.175787  635877 kubeconfig.go:125] found "ha-046400" server: "https://192.168.49.254:8443"
	I1217 08:04:49.175819  635877 api_server.go:166] Checking apiserver status ...
	I1217 08:04:49.175857  635877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:04:49.187933  635877 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1205/cgroup
	W1217 08:04:49.196639  635877 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1205/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:04:49.196700  635877 ssh_runner.go:195] Run: ls
	I1217 08:04:49.200701  635877 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 08:04:49.204990  635877 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 08:04:49.205022  635877 status.go:463] ha-046400-m03 apiserver status = Running (err=<nil>)
	I1217 08:04:49.205034  635877 status.go:176] ha-046400-m03 status: &{Name:ha-046400-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:04:49.205054  635877 status.go:174] checking status of ha-046400-m04 ...
	I1217 08:04:49.205376  635877 cli_runner.go:164] Run: docker container inspect ha-046400-m04 --format={{.State.Status}}
	I1217 08:04:49.225099  635877 status.go:371] ha-046400-m04 host status = "Running" (err=<nil>)
	I1217 08:04:49.225124  635877 host.go:66] Checking if "ha-046400-m04" exists ...
	I1217 08:04:49.225380  635877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-046400-m04
	I1217 08:04:49.244389  635877 host.go:66] Checking if "ha-046400-m04" exists ...
	I1217 08:04:49.244726  635877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:04:49.244787  635877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-046400-m04
	I1217 08:04:49.263915  635877 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/ha-046400-m04/id_ed25519 Username:docker}
	I1217 08:04:49.355593  635877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:04:49.380489  635877 status.go:176] ha-046400-m04 status: &{Name:ha-046400-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.40s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.65s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 node start m02 --alsologtostderr -v 5: (13.666768124s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.65s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.29s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 stop --alsologtostderr -v 5
E1217 08:05:21.415743  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.226738  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.233247  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.244671  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.266122  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.307677  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.389327  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.551205  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:27.873281  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:28.514929  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:29.796662  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:32.358702  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:37.481612  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:05:47.723729  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 stop --alsologtostderr -v 5: (49.310636588s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 start --wait true --alsologtostderr -v 5
E1217 08:06:08.205952  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:06:49.168189  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:06:54.516299  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 start --wait true --alsologtostderr -v 5: (1m2.839281406s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.29s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.73s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 node delete m03 --alsologtostderr -v 5: (9.886113775s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (43.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 stop --alsologtostderr -v 5
E1217 08:07:37.556057  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 stop --alsologtostderr -v 5: (43.101611843s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5: exit status 7 (123.781306ms)

                                                
                                                
-- stdout --
	ha-046400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-046400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-046400-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:07:52.670158  649974 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:07:52.670258  649974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:07:52.670262  649974 out.go:374] Setting ErrFile to fd 2...
	I1217 08:07:52.670266  649974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:07:52.670491  649974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:07:52.670678  649974 out.go:368] Setting JSON to false
	I1217 08:07:52.670714  649974 mustload.go:66] Loading cluster: ha-046400
	I1217 08:07:52.670792  649974 notify.go:221] Checking for updates...
	I1217 08:07:52.671075  649974 config.go:182] Loaded profile config "ha-046400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:07:52.671091  649974 status.go:174] checking status of ha-046400 ...
	I1217 08:07:52.671505  649974 cli_runner.go:164] Run: docker container inspect ha-046400 --format={{.State.Status}}
	I1217 08:07:52.692205  649974 status.go:371] ha-046400 host status = "Stopped" (err=<nil>)
	I1217 08:07:52.692236  649974 status.go:384] host is not running, skipping remaining checks
	I1217 08:07:52.692264  649974 status.go:176] ha-046400 status: &{Name:ha-046400 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:07:52.692355  649974 status.go:174] checking status of ha-046400-m02 ...
	I1217 08:07:52.692687  649974 cli_runner.go:164] Run: docker container inspect ha-046400-m02 --format={{.State.Status}}
	I1217 08:07:52.711274  649974 status.go:371] ha-046400-m02 host status = "Stopped" (err=<nil>)
	I1217 08:07:52.711327  649974 status.go:384] host is not running, skipping remaining checks
	I1217 08:07:52.711342  649974 status.go:176] ha-046400-m02 status: &{Name:ha-046400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:07:52.711375  649974 status.go:174] checking status of ha-046400-m04 ...
	I1217 08:07:52.711677  649974 cli_runner.go:164] Run: docker container inspect ha-046400-m04 --format={{.State.Status}}
	I1217 08:07:52.729527  649974 status.go:371] ha-046400-m04 host status = "Stopped" (err=<nil>)
	I1217 08:07:52.729565  649974 status.go:384] host is not running, skipping remaining checks
	I1217 08:07:52.729572  649974 status.go:176] ha-046400-m04 status: &{Name:ha-046400-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.23s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.06s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 08:08:05.257855  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:08:11.090052  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.232462533s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.06s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (42.21s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-046400 node add --control-plane --alsologtostderr -v 5: (41.298576989s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-046400 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.21s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestJSONOutput/start/Command (43.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-853267 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-853267 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (43.727206789s)
--- PASS: TestJSONOutput/start/Command (43.73s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.24s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-853267 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-853267 --output=json --user=testUser: (6.241319486s)
--- PASS: TestJSONOutput/stop/Command (6.24s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.31s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-102818 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-102818 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (109.07525ms)

-- stdout --
	{"specversion":"1.0","id":"528f65a7-92a1-43e6-8fc2-533b417a7370","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102818] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73ac7976-2f83-4e15-a746-3bfdbe050591","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22182"}}
	{"specversion":"1.0","id":"69754ee8-d30d-41e3-a872-265fbdf0ae89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"69f3d844-69b0-441b-99c7-727ac87f6c55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig"}}
	{"specversion":"1.0","id":"329ba7cc-3145-48e2-bc5f-a9426bacd6eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube"}}
	{"specversion":"1.0","id":"2e283310-9a63-493c-88c5-7fe4fad6495f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cbdcec02-89d5-4db8-b50f-a0f02dfe551a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a1bb1b36-75aa-48d7-a3ae-3b4e8df6c38d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-102818" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-102818
--- PASS: TestErrorJSONOutput (0.31s)
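
The CloudEvents-style JSON lines captured above are plain newline-delimited JSON, so they can be post-processed with standard tools. A minimal sketch, assuming jq is available and using a throwaway profile name that is not part of this run, prints only the error events such as the DRV_UNSUPPORTED_OS one shown:

    # Hypothetical reproduction: stream --output=json events and keep only errors.
    # "json-demo" and --driver=fail are placeholders chosen to force an error event.
    out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

Each event carries its own "type" and a "data" payload, so the same filter works for step or info events by changing the type string.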

TestKicCustomNetwork/create_custom_network (36.6s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-190537 --network=
E1217 08:10:54.931834  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-190537 --network=: (34.318907132s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-190537" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-190537
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-190537: (2.259238204s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.60s)

TestKicCustomNetwork/use_default_bridge_network (26.44s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-229958 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-229958 --network=bridge: (24.311696336s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-229958" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-229958
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-229958: (2.109169238s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.44s)

TestKicExistingNetwork (26.36s)

=== RUN   TestKicExistingNetwork
I1217 08:11:43.400947  556055 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 08:11:43.420625  556055 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 08:11:43.420701  556055 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 08:11:43.420722  556055 cli_runner.go:164] Run: docker network inspect existing-network
W1217 08:11:43.442161  556055 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 08:11:43.442197  556055 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1217 08:11:43.442233  556055 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1217 08:11:43.442491  556055 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 08:11:43.461982  556055 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-971513c2879b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:b9:48:a1:bc:14} reservation:<nil>}
I1217 08:11:43.462366  556055 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e14f80}
I1217 08:11:43.462482  556055 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 08:11:43.462568  556055 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 08:11:43.518231  556055 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-265668 --network=existing-network
E1217 08:11:54.518891  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-265668 --network=existing-network: (24.084198676s)
helpers_test.go:176: Cleaning up "existing-network-265668" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-265668
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-265668: (2.117604886s)
I1217 08:12:09.738969  556055 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.36s)
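
The passing run above first creates the bridge network itself and then hands it to minikube via --network. A condensed manual equivalent, treating the network name, profile name, and subnet below as placeholders, would look like:

    # Pre-create a Docker bridge network, then start a cluster on it.
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 pre-made-network
    out/minikube-linux-amd64 start -p reuse-net-demo --network=pre-made-network --driver=docker --container-runtime=crio
    docker network ls --format '{{.Name}}'   # the pre-made network should still be listed afterwards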

TestKicCustomSubnet (27.61s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-942974 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-942974 --subnet=192.168.60.0/24: (25.320334523s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-942974 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-942974" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-942974
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-942974: (2.264231714s)
--- PASS: TestKicCustomSubnet (27.61s)

TestKicStaticIP (27.19s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-519584 --static-ip=192.168.200.200
E1217 08:12:37.556374  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-519584 --static-ip=192.168.200.200: (24.809238387s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-519584 ip
helpers_test.go:176: Cleaning up "static-ip-519584" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-519584
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-519584: (2.218864555s)
--- PASS: TestKicStaticIP (27.19s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (50.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-564034 --driver=docker  --container-runtime=crio
E1217 08:13:17.582501  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-564034 --driver=docker  --container-runtime=crio: (22.257601368s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-567120 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-567120 --driver=docker  --container-runtime=crio: (22.469609451s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-564034
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-567120
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-567120" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-567120
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-567120: (2.412289184s)
helpers_test.go:176: Cleaning up "first-564034" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-564034
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-564034: (2.450111509s)
--- PASS: TestMinikubeProfile (50.85s)
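
The sequence above uses "minikube profile <name>" to switch the active profile between the two clusters and "profile list -ojson" to read the result back. A hand-run sketch with placeholder profile names:

    out/minikube-linux-amd64 start -p first-demo --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p second-demo --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 profile first-demo      # make first-demo the active profile
    out/minikube-linux-amd64 profile list -ojson     # machine-readable list of all profiles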

TestMountStart/serial/StartWithMountFirst (7.8s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-468308 --memory=3072 --mount-string /tmp/TestMountStartserial2529157566/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-468308 --memory=3072 --mount-string /tmp/TestMountStartserial2529157566/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.801168082s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.80s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-468308 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-482990 --memory=3072 --mount-string /tmp/TestMountStartserial2529157566/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-482990 --memory=3072 --mount-string /tmp/TestMountStartserial2529157566/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.958200621s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.96s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-482990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-468308 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-468308 --alsologtostderr -v=5: (1.700997806s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-482990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-482990
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-482990: (1.271032828s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (8.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-482990
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-482990: (7.009811606s)
--- PASS: TestMountStart/serial/RestartStopped (8.01s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-482990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (73.08s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-488544 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 08:15:27.226584  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-488544 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.568522795s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.08s)

TestMultiNode/serial/DeployApp2Nodes (4.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-488544 -- rollout status deployment/busybox: (2.837942982s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-5clr5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-qqd4f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-5clr5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-qqd4f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-5clr5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-qqd4f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.28s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-5clr5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-5clr5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-qqd4f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-488544 -- exec busybox-7b57f96db7-qqd4f -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
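
The check above resolves host.minikube.internal from inside each busybox pod and pings the address it gets back (the cluster gateway, 192.168.67.1 in this run). Roughly the same probe can be run by hand against any pod; the pod name below is a placeholder:

    # Resolve the host address from inside a pod, then ping it once.
    kubectl --context multinode-488544 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context multinode-488544 exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"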

TestMultiNode/serial/AddNode (28.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-488544 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-488544 -v=5 --alsologtostderr: (27.570411974s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.23s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-488544 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp testdata/cp-test.txt multinode-488544:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3739461923/001/cp-test_multinode-488544.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544:/home/docker/cp-test.txt multinode-488544-m02:/home/docker/cp-test_multinode-488544_multinode-488544-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m02 "sudo cat /home/docker/cp-test_multinode-488544_multinode-488544-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544:/home/docker/cp-test.txt multinode-488544-m03:/home/docker/cp-test_multinode-488544_multinode-488544-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m03 "sudo cat /home/docker/cp-test_multinode-488544_multinode-488544-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp testdata/cp-test.txt multinode-488544-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3739461923/001/cp-test_multinode-488544-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544-m02:/home/docker/cp-test.txt multinode-488544:/home/docker/cp-test_multinode-488544-m02_multinode-488544.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544 "sudo cat /home/docker/cp-test_multinode-488544-m02_multinode-488544.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544-m02:/home/docker/cp-test.txt multinode-488544-m03:/home/docker/cp-test_multinode-488544-m02_multinode-488544-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m03 "sudo cat /home/docker/cp-test_multinode-488544-m02_multinode-488544-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp testdata/cp-test.txt multinode-488544-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3739461923/001/cp-test_multinode-488544-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544-m03:/home/docker/cp-test.txt multinode-488544:/home/docker/cp-test_multinode-488544-m03_multinode-488544.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544 "sudo cat /home/docker/cp-test_multinode-488544-m03_multinode-488544.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 cp multinode-488544-m03:/home/docker/cp-test.txt multinode-488544-m02:/home/docker/cp-test_multinode-488544-m03_multinode-488544-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m02 "sudo cat /home/docker/cp-test_multinode-488544-m03_multinode-488544-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.16s)
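
The copy matrix above reduces to "minikube cp" between the host and named nodes, verified with "minikube ssh -n". One round trip, using the file and node names from this run, looks like:

    out/minikube-linux-amd64 -p multinode-488544 cp testdata/cp-test.txt multinode-488544-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-488544 ssh -n multinode-488544-m02 "sudo cat /home/docker/cp-test.txt"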

TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-488544 node stop m03: (1.296391881s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-488544 status: exit status 7 (509.610599ms)

-- stdout --
	multinode-488544
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-488544-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-488544-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr: exit status 7 (511.06943ms)

-- stdout --
	multinode-488544
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-488544-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-488544-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 08:16:24.374645  709817 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:16:24.374891  709817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:16:24.374899  709817 out.go:374] Setting ErrFile to fd 2...
	I1217 08:16:24.374903  709817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:16:24.375131  709817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:16:24.375286  709817 out.go:368] Setting JSON to false
	I1217 08:16:24.375325  709817 mustload.go:66] Loading cluster: multinode-488544
	I1217 08:16:24.375472  709817 notify.go:221] Checking for updates...
	I1217 08:16:24.375704  709817 config.go:182] Loaded profile config "multinode-488544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:16:24.375720  709817 status.go:174] checking status of multinode-488544 ...
	I1217 08:16:24.376149  709817 cli_runner.go:164] Run: docker container inspect multinode-488544 --format={{.State.Status}}
	I1217 08:16:24.395041  709817 status.go:371] multinode-488544 host status = "Running" (err=<nil>)
	I1217 08:16:24.395071  709817 host.go:66] Checking if "multinode-488544" exists ...
	I1217 08:16:24.395355  709817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-488544
	I1217 08:16:24.414873  709817 host.go:66] Checking if "multinode-488544" exists ...
	I1217 08:16:24.415290  709817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:16:24.415370  709817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-488544
	I1217 08:16:24.434751  709817 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33310 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/multinode-488544/id_ed25519 Username:docker}
	I1217 08:16:24.526098  709817 ssh_runner.go:195] Run: systemctl --version
	I1217 08:16:24.532769  709817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:16:24.546557  709817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:16:24.603427  709817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 08:16:24.592999589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:16:24.604295  709817 kubeconfig.go:125] found "multinode-488544" server: "https://192.168.67.2:8443"
	I1217 08:16:24.604361  709817 api_server.go:166] Checking apiserver status ...
	I1217 08:16:24.604413  709817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:16:24.616467  709817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1269/cgroup
	W1217 08:16:24.625372  709817 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1269/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:16:24.625427  709817 ssh_runner.go:195] Run: ls
	I1217 08:16:24.629459  709817 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 08:16:24.633822  709817 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 08:16:24.633849  709817 status.go:463] multinode-488544 apiserver status = Running (err=<nil>)
	I1217 08:16:24.633859  709817 status.go:176] multinode-488544 status: &{Name:multinode-488544 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:16:24.633881  709817 status.go:174] checking status of multinode-488544-m02 ...
	I1217 08:16:24.634125  709817 cli_runner.go:164] Run: docker container inspect multinode-488544-m02 --format={{.State.Status}}
	I1217 08:16:24.652904  709817 status.go:371] multinode-488544-m02 host status = "Running" (err=<nil>)
	I1217 08:16:24.652930  709817 host.go:66] Checking if "multinode-488544-m02" exists ...
	I1217 08:16:24.653186  709817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-488544-m02
	I1217 08:16:24.672555  709817 host.go:66] Checking if "multinode-488544-m02" exists ...
	I1217 08:16:24.672859  709817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:16:24.672909  709817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-488544-m02
	I1217 08:16:24.692653  709817 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:33315 SSHKeyPath:/home/jenkins/minikube-integration/22182-552461/.minikube/machines/multinode-488544-m02/id_ed25519 Username:docker}
	I1217 08:16:24.784314  709817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:16:24.797285  709817 status.go:176] multinode-488544-m02 status: &{Name:multinode-488544-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:16:24.797324  709817 status.go:174] checking status of multinode-488544-m03 ...
	I1217 08:16:24.797673  709817 cli_runner.go:164] Run: docker container inspect multinode-488544-m03 --format={{.State.Status}}
	I1217 08:16:24.817667  709817 status.go:371] multinode-488544-m03 host status = "Stopped" (err=<nil>)
	I1217 08:16:24.817693  709817 status.go:384] host is not running, skipping remaining checks
	I1217 08:16:24.817702  709817 status.go:176] multinode-488544-m03 status: &{Name:multinode-488544-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
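
As the output above shows, stopping a single node leaves minikube status with exit code 7 while the per-node report still prints. A quick manual check, reusing the node name from this run:

    out/minikube-linux-amd64 -p multinode-488544 node stop m03
    out/minikube-linux-amd64 -p multinode-488544 status
    echo "status exit code: $?"   # expected to be 7 while m03 is stopped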

TestMultiNode/serial/StartAfterStop (7.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-488544 node start m03 -v=5 --alsologtostderr: (6.571267528s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.29s)

TestMultiNode/serial/RestartKeepsNodes (82.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-488544
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-488544
E1217 08:16:54.523114  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-488544: (31.49147304s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-488544 --wait=true -v=5 --alsologtostderr
E1217 08:17:37.556346  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-488544 --wait=true -v=5 --alsologtostderr: (50.466581323s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-488544
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.09s)

TestMultiNode/serial/DeleteNode (5.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-488544 node delete m03: (4.73560687s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)

TestMultiNode/serial/StopMultiNode (28.72s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-488544 stop: (28.502562196s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-488544 status: exit status 7 (105.099366ms)

-- stdout --
	multinode-488544
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-488544-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr: exit status 7 (107.398666ms)

-- stdout --
	multinode-488544
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-488544-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 08:18:28.224688  720074 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:18:28.224789  720074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:18:28.224796  720074 out.go:374] Setting ErrFile to fd 2...
	I1217 08:18:28.224801  720074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:18:28.225008  720074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:18:28.225168  720074 out.go:368] Setting JSON to false
	I1217 08:18:28.225200  720074 mustload.go:66] Loading cluster: multinode-488544
	I1217 08:18:28.225322  720074 notify.go:221] Checking for updates...
	I1217 08:18:28.225612  720074 config.go:182] Loaded profile config "multinode-488544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:18:28.225631  720074 status.go:174] checking status of multinode-488544 ...
	I1217 08:18:28.226168  720074 cli_runner.go:164] Run: docker container inspect multinode-488544 --format={{.State.Status}}
	I1217 08:18:28.249111  720074 status.go:371] multinode-488544 host status = "Stopped" (err=<nil>)
	I1217 08:18:28.249133  720074 status.go:384] host is not running, skipping remaining checks
	I1217 08:18:28.249140  720074 status.go:176] multinode-488544 status: &{Name:multinode-488544 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:18:28.249175  720074 status.go:174] checking status of multinode-488544-m02 ...
	I1217 08:18:28.249435  720074 cli_runner.go:164] Run: docker container inspect multinode-488544-m02 --format={{.State.Status}}
	I1217 08:18:28.269059  720074 status.go:371] multinode-488544-m02 host status = "Stopped" (err=<nil>)
	I1217 08:18:28.269080  720074 status.go:384] host is not running, skipping remaining checks
	I1217 08:18:28.269086  720074 status.go:176] multinode-488544-m02 status: &{Name:multinode-488544-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.72s)

TestMultiNode/serial/RestartMultiNode (45.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-488544 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 08:19:00.620203  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-488544 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (45.360089239s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-488544 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.98s)

TestMultiNode/serial/ValidateNameConflict (27.52s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-488544
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-488544-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-488544-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.118714ms)

-- stdout --
	* [multinode-488544-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-488544-m02' is duplicated with machine name 'multinode-488544-m02' in profile 'multinode-488544'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-488544-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-488544-m03 --driver=docker  --container-runtime=crio: (24.630527111s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-488544
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-488544: exit status 80 (304.233762ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-488544 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-488544-m03 already exists in multinode-488544-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-488544-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-488544-m03: (2.431439333s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.52s)
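Condensed, the two invariants exercised above are: a new profile may not reuse a machine name that already belongs to a multi-node profile (exit 14), and `node add` refuses a node whose generated name is already claimed by a standalone profile (exit 80). A repro sketch using only commands that appear in this log:

    # Rejected, exit 14: "-m02" is already a machine name inside profile multinode-488544.
    out/minikube-linux-amd64 start -p multinode-488544-m02 --driver=docker --container-runtime=crio
    # Accepted: "-m03" is free, so a standalone profile is created ...
    out/minikube-linux-amd64 start -p multinode-488544-m03 --driver=docker --container-runtime=crio
    # ... which then blocks adding a node of the same generated name to the multi-node profile (exit 80).
    out/minikube-linux-amd64 node add -p multinode-488544
    out/minikube-linux-amd64 delete -p multinode-488544-m03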

                                                
                                    
x
+
TestPreload (112.49s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-217458 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1217 08:20:27.226064  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-217458 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (57.611813414s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-217458 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-217458 image pull gcr.io/k8s-minikube/busybox: (2.715022493s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-217458
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-217458: (6.160237846s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-217458 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-217458 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (43.342889071s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-217458 image list
helpers_test.go:176: Cleaning up "test-preload-217458" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-217458
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-217458: (2.420774992s)
--- PASS: TestPreload (112.49s)
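The preload flow exercised above, condensed: start without the preloaded image tarball, pull an extra image, stop, restart with --preload=true, and confirm the pulled image survived. A sketch with the same flags and profile name as the log (extra verbosity flags dropped):

    # Start with preload disabled, then add an image that is not in the preload tarball.
    out/minikube-linux-amd64 start -p test-preload-217458 --memory=3072 --preload=false --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-217458 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-217458
    # Restart with preload enabled and verify the pulled image is still listed.
    out/minikube-linux-amd64 start -p test-preload-217458 --preload=true --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-217458 image list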

                                                
                                    
x
+
TestScheduledStopUnix (100.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-309868 --memory=3072 --driver=docker  --container-runtime=crio
E1217 08:21:50.293739  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-819971/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:21:54.519473  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-309868 --memory=3072 --driver=docker  --container-runtime=crio: (24.083841207s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309868 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 08:22:02.762505  737244 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:22:02.762831  737244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:22:02.762845  737244 out.go:374] Setting ErrFile to fd 2...
	I1217 08:22:02.762849  737244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:22:02.763122  737244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:22:02.763408  737244 out.go:368] Setting JSON to false
	I1217 08:22:02.763507  737244 mustload.go:66] Loading cluster: scheduled-stop-309868
	I1217 08:22:02.763869  737244 config.go:182] Loaded profile config "scheduled-stop-309868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:22:02.763938  737244 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/config.json ...
	I1217 08:22:02.764116  737244 mustload.go:66] Loading cluster: scheduled-stop-309868
	I1217 08:22:02.764206  737244 config.go:182] Loaded profile config "scheduled-stop-309868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-309868 -n scheduled-stop-309868
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 08:22:03.168375  737395 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:22:03.168493  737395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:22:03.168498  737395 out.go:374] Setting ErrFile to fd 2...
	I1217 08:22:03.168501  737395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:22:03.168721  737395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:22:03.168981  737395 out.go:368] Setting JSON to false
	I1217 08:22:03.169213  737395 daemonize_unix.go:73] killing process 737278 as it is an old scheduled stop
	I1217 08:22:03.169317  737395 mustload.go:66] Loading cluster: scheduled-stop-309868
	I1217 08:22:03.169771  737395 config.go:182] Loaded profile config "scheduled-stop-309868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:22:03.169865  737395 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/config.json ...
	I1217 08:22:03.170119  737395 mustload.go:66] Loading cluster: scheduled-stop-309868
	I1217 08:22:03.170301  737395 config.go:182] Loaded profile config "scheduled-stop-309868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 08:22:03.176541  556055 retry.go:31] will retry after 147.168µs: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.177709  556055 retry.go:31] will retry after 215.436µs: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.178817  556055 retry.go:31] will retry after 327.759µs: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.179983  556055 retry.go:31] will retry after 306.427µs: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.181120  556055 retry.go:31] will retry after 348.481µs: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.182262  556055 retry.go:31] will retry after 507.828µs: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.183397  556055 retry.go:31] will retry after 1.22488ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.185597  556055 retry.go:31] will retry after 1.835156ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.187796  556055 retry.go:31] will retry after 2.817878ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.191055  556055 retry.go:31] will retry after 3.425501ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.195316  556055 retry.go:31] will retry after 7.561732ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.203600  556055 retry.go:31] will retry after 8.620948ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.212896  556055 retry.go:31] will retry after 17.013808ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.230209  556055 retry.go:31] will retry after 24.405967ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.255597  556055 retry.go:31] will retry after 19.037007ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
I1217 08:22:03.274835  556055 retry.go:31] will retry after 32.99706ms: open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309868 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-309868 -n scheduled-stop-309868
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-309868
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309868 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 08:22:29.120964  738085 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:22:29.121064  738085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:22:29.121069  738085 out.go:374] Setting ErrFile to fd 2...
	I1217 08:22:29.121073  738085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:22:29.121268  738085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:22:29.121592  738085 out.go:368] Setting JSON to false
	I1217 08:22:29.121679  738085 mustload.go:66] Loading cluster: scheduled-stop-309868
	I1217 08:22:29.122010  738085 config.go:182] Loaded profile config "scheduled-stop-309868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:22:29.122085  738085 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/scheduled-stop-309868/config.json ...
	I1217 08:22:29.122270  738085 mustload.go:66] Loading cluster: scheduled-stop-309868
	I1217 08:22:29.122380  738085 config.go:182] Loaded profile config "scheduled-stop-309868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
E1217 08:22:37.556464  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-309868
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-309868: exit status 7 (86.96346ms)

                                                
                                                
-- stdout --
	scheduled-stop-309868
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-309868 -n scheduled-stop-309868
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-309868 -n scheduled-stop-309868: exit status 7 (86.28689ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-309868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-309868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-309868: (5.066669104s)
--- PASS: TestScheduledStopUnix (100.76s)
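In short, the scheduled-stop sequence above is: schedule a stop, reschedule it (the previous scheduler process is killed), cancel, then schedule again and let it fire, after which status reports the host as Stopped with exit code 7. A sketch using only flags from the log:

    # Schedule a stop, then replace it with a shorter one (the old scheduled-stop process is killed).
    out/minikube-linux-amd64 stop -p scheduled-stop-309868 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-309868 --schedule 15s
    # Cancel anything pending, then schedule again and wait for it to fire.
    out/minikube-linux-amd64 stop -p scheduled-stop-309868 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-309868 --schedule 15s
    sleep 20
    # Exit status 7 with host "Stopped" is the expected end state here.
    out/minikube-linux-amd64 status --format='{{.Host}}' -p scheduled-stop-309868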

                                                
                                    
x
+
TestInsufficientStorage (11.94s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-691717 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-691717 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.419497874s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"01008a53-ed63-48e2-8c6a-df8e1270b39e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-691717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bca8b593-954a-4d06-a12c-2a52264710a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22182"}}
	{"specversion":"1.0","id":"337f5335-4f13-4fe6-9686-726bf4b00b37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"139a28e9-6e12-40b2-a531-53939a3cc68f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig"}}
	{"specversion":"1.0","id":"e54dd787-1bfb-420c-8ba4-bab1218fcf51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube"}}
	{"specversion":"1.0","id":"b2a8e143-55b9-4255-b530-0ab6240f717a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1e2c39f2-17e6-4202-8046-9c25cc18d854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8e7154f4-d598-4efe-bf89-6044cfd32f0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ed196f3c-6633-403e-bb2c-20fcc4264818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fcb12345-6e32-4b62-a874-e29f2a225233","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8643f901-f7a9-431a-889a-b0282f0004d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b5fd1ff0-bc6c-4490-8b6d-11081bcef8fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-691717\" primary control-plane node in \"insufficient-storage-691717\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba825387-4384-4373-9c9f-cafa987b2e61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b491afbf-7616-4254-b342-0039583100a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"694a3001-7cf7-411e-b6c1-faa67e268687","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-691717 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-691717 --output=json --layout=cluster: exit status 7 (295.282251ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-691717","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-691717","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 08:23:29.069493  740625 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-691717" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-691717 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-691717 --output=json --layout=cluster: exit status 7 (300.718831ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-691717","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-691717","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 08:23:29.371266  740733 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-691717" does not appear in /home/jenkins/minikube-integration/22182-552461/kubeconfig
	E1217 08:23:29.382114  740733 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/insufficient-storage-691717/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-691717" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-691717
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-691717: (1.924597456s)
--- PASS: TestInsufficientStorage (11.94s)
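The structured status above is what the test asserts against. As a sketch, the relevant fields can be pulled out with jq (jq is my assumption, not something the test itself uses); the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the start output appear to be how the harness simulates a full /var:

    # Exit code 7 is expected; both the cluster and its node should report InsufficientStorage.
    out/minikube-linux-amd64 status -p insufficient-storage-691717 --output=json --layout=cluster \
      | jq -r '.StatusName, .Nodes[].StatusName'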

                                                
                                    
x
+
TestRunningBinaryUpgrade (307.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2737138334 start -p running-upgrade-201781 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2737138334 start -p running-upgrade-201781 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.590757729s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-201781 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-201781 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.586142621s)
helpers_test.go:176: Cleaning up "running-upgrade-201781" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-201781
E1217 08:29:57.584446  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/addons-910958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-201781: (6.5836168s)
--- PASS: TestRunningBinaryUpgrade (307.24s)
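The running-binary upgrade above boils down to: create the profile with an old release, then rerun start on the same, still-running profile with the freshly built binary. A sketch; ./minikube-v1.35.0 stands in for the temp copy of the v1.35.0 release the test downloads (/tmp/minikube-v1.35.0.2737138334 in the log):

    # Old release creates the profile and leaves it running.
    ./minikube-v1.35.0 start -p running-upgrade-201781 --memory=3072 --vm-driver=docker --container-runtime=crio
    # New binary takes over the same running profile in place.
    out/minikube-linux-amd64 start -p running-upgrade-201781 --memory=3072 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-201781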

                                                
                                    
x
+
TestKubernetesUpgrade (312.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.185042968s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-568559
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-568559: (2.141335788s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-568559 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-568559 status --format={{.Host}}: exit status 7 (92.955868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.907918443s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-568559 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (123.175455ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-568559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-568559
	    minikube start -p kubernetes-upgrade-568559 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5685592 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-568559 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (9.848003653s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-568559" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-568559
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-568559: (3.891504198s)
--- PASS: TestKubernetesUpgrade (312.30s)
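The Kubernetes upgrade path above, condensed from the commands in the log: bring the cluster up on v1.28.0, stop it, restart on v1.35.0-rc.1, confirm a downgrade request is rejected (exit 106), then restart once more at the new version:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-568559
    out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=crio
    # Downgrading an existing cluster is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106).
    out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # Restarting at the current version still works.
    out/minikube-linux-amd64 start -p kubernetes-upgrade-568559 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=crio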

                                                
                                    
x
+
TestMissingContainerUpgrade (100.4s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3156054662 start -p missing-upgrade-442124 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3156054662 start -p missing-upgrade-442124 --memory=3072 --driver=docker  --container-runtime=crio: (53.187925939s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-442124
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-442124: (1.739090157s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-442124
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-442124 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-442124 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.31342834s)
helpers_test.go:176: Cleaning up "missing-upgrade-442124" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-442124
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-442124: (2.58529172s)
--- PASS: TestMissingContainerUpgrade (100.40s)
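The missing-container scenario above: the profile is created by an old release, its docker container is removed behind minikube's back, and the new binary is expected to recreate it on the next start. A sketch of the same steps (the old-binary path again stands in for the temp copy in the log):

    ./minikube-v1.35.0 start -p missing-upgrade-442124 --memory=3072 --driver=docker --container-runtime=crio
    # Remove the container out from under minikube.
    docker stop missing-upgrade-442124
    docker rm missing-upgrade-442124
    # The new binary must recover the profile by recreating the container.
    out/minikube-linux-amd64 start -p missing-upgrade-442124 --memory=3072 --driver=docker --container-runtime=crio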

                                                
                                    
x
+
TestPause/serial/Start (63.27s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-262039 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-262039 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m3.271738914s)
--- PASS: TestPause/serial/Start (63.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.84s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (80.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.943815415 start -p stopped-upgrade-387280 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.943815415 start -p stopped-upgrade-387280 --memory=3072 --vm-driver=docker  --container-runtime=crio: (50.307941813s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.943815415 -p stopped-upgrade-387280 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.943815415 -p stopped-upgrade-387280 stop: (2.552695326s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-387280 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-387280 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.32196312s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (80.18s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-262039 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-262039 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.778304741s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.79s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-387280
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-387280: (1.43064011s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (112.670014ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-568878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
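As the MK_USAGE error above states, --no-kubernetes cannot be combined with an explicit --kubernetes-version. A sketch of the failing call and the fix suggested by the output:

    # Rejected with exit 14: the two flags are mutually exclusive.
    out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # If a version is pinned in the global config, unset it first, then start without Kubernetes.
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --driver=docker --container-runtime=crio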

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (26.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.970418876s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-568878 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-055130 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-055130 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (180.2278ms)

                                                
                                                
-- stdout --
	* [false-055130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:25:33.586643  776262 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:25:33.586934  776262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:25:33.586944  776262 out.go:374] Setting ErrFile to fd 2...
	I1217 08:25:33.586950  776262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:25:33.587184  776262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-552461/.minikube/bin
	I1217 08:25:33.587715  776262 out.go:368] Setting JSON to false
	I1217 08:25:33.589028  776262 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7679,"bootTime":1765952255,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:25:33.589094  776262 start.go:143] virtualization: kvm guest
	I1217 08:25:33.591396  776262 out.go:179] * [false-055130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:25:33.593041  776262 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:25:33.593063  776262 notify.go:221] Checking for updates...
	I1217 08:25:33.595849  776262 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:25:33.597311  776262 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-552461/kubeconfig
	I1217 08:25:33.598910  776262 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-552461/.minikube
	I1217 08:25:33.600664  776262 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:25:33.602169  776262 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:25:33.604486  776262 config.go:182] Loaded profile config "NoKubernetes-568878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:25:33.604626  776262 config.go:182] Loaded profile config "kubernetes-upgrade-568559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:25:33.604720  776262 config.go:182] Loaded profile config "running-upgrade-201781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 08:25:33.604812  776262 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:25:33.629562  776262 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 08:25:33.629670  776262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 08:25:33.693495  776262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 08:25:33.683199996 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 08:25:33.693628  776262 docker.go:319] overlay module found
	I1217 08:25:33.696059  776262 out.go:179] * Using the docker driver based on user configuration
	I1217 08:25:33.697428  776262 start.go:309] selected driver: docker
	I1217 08:25:33.697444  776262 start.go:927] validating driver "docker" against <nil>
	I1217 08:25:33.697457  776262 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:25:33.699257  776262 out.go:203] 
	W1217 08:25:33.700557  776262 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 08:25:33.701829  776262 out.go:203] 

                                                
                                                
** /stderr **
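The exit above is the expected behaviour: with the crio runtime, minikube refuses --cni=false because crio requires a CNI plugin. A hedged sketch of starts that should be accepted instead (the bridge value is an illustrative explicit CNI choice, not something this test runs):

    # Default CNI selection is fine with crio.
    out/minikube-linux-amd64 start -p false-055130 --memory=3072 --driver=docker --container-runtime=crio
    # Or pick a CNI explicitly, for example the built-in bridge plugin.
    out/minikube-linux-amd64 start -p false-055130 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio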
net_test.go:88: 
----------------------- debugLogs start: false-055130 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-055130" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-568559
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-201781
contexts:
- context:
    cluster: kubernetes-upgrade-568559
    user: kubernetes-upgrade-568559
  name: kubernetes-upgrade-568559
- context:
    cluster: running-upgrade-201781
    user: running-upgrade-201781
  name: running-upgrade-201781
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-568559
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/client.crt
    client-key: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/client.key
- name: running-upgrade-201781
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/running-upgrade-201781/client.crt
    client-key: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/running-upgrade-201781/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-055130

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-055130"

                                                
                                                
----------------------- debugLogs end: false-055130 [took: 3.802669771s] --------------------------------
helpers_test.go:176: Cleaning up "false-055130" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-055130
--- PASS: TestNetworkPlugins/group/false (4.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.899347125s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-568878 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-568878 status -o json: exit status 2 (343.476398ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-568878","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-568878
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-568878: (3.309185246s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568878 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.080124689s)
--- PASS: TestNoKubernetes/serial/Start (7.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22182-552461/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-568878 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-568878 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.237126ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (16.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.260405324s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-568878
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-568878: (1.291500625s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568878 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568878 --driver=docker  --container-runtime=crio: (7.206524618s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-568878 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-568878 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.063489ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (42.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1217 08:27:37.556330  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.612467089s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-055130 "pgrep -a kubelet"
I1217 08:27:56.399552  556055 config.go:182] Loaded profile config "auto-055130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-055130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-klj8s" [a604b6a1-c8c2-4231-802a-44ebe7ac6de1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-klj8s" [a604b6a1-c8c2-4231-802a-44ebe7ac6de1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003273734s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-055130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.808553278s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-kbzfd" [6c618a55-d592-4df7-ab0c-f333cba2b6d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004141818s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (55.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.396114927s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-055130 "pgrep -a kubelet"
I1217 08:29:14.401660  556055 config.go:182] Loaded profile config "kindnet-055130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-055130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dr58d" [f0c8b8b2-972b-46f9-b24a-13c54343d41b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-dr58d" [f0c8b8b2-972b-46f9-b24a-13c54343d41b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003431584s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-055130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (58.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (58.44914401s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (69.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.736392318s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (55.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.569427653s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-4w59r" [6419d1b1-57a6-4042-a8e6-f288f24a9bac] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-4w59r" [6419d1b1-57a6-4042-a8e6-f288f24a9bac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004500653s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-055130 "pgrep -a kubelet"
I1217 08:30:13.396271  556055 config.go:182] Loaded profile config "calico-055130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-055130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rrcvl" [1237b4b5-fa5b-4eec-a11f-a98c15194731] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rrcvl" [1237b4b5-fa5b-4eec-a11f-a98c15194731] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005323397s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-055130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (41.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-055130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.102151127s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-055130 "pgrep -a kubelet"
I1217 08:30:48.392195  556055 config.go:182] Loaded profile config "custom-flannel-055130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-055130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-sbgnv" [b13f4362-4e45-4a48-8d9e-257f28cd8ddb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-sbgnv" [b13f4362-4e45-4a48-8d9e-257f28cd8ddb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003821698s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-bnflg" [6b9fd170-9b28-4cda-89b6-542b97f17e2a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00500358s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-055130 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-055130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I1217 08:30:59.948842  556055 config.go:182] Loaded profile config "enable-default-cni-055130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-055130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-56qgd" [5e883624-bf38-4c0b-86b2-f4b3a44f212e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-56qgd" [5e883624-bf38-4c0b-86b2-f4b3a44f212e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00490388s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-055130 "pgrep -a kubelet"
I1217 08:31:01.185857  556055 config.go:182] Loaded profile config "flannel-055130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-055130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-4gfm5" [4b061c59-e066-40cb-bca5-e5c2a355976a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-4gfm5" [4b061c59-e066-40cb-bca5-e5c2a355976a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.005235127s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-055130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-055130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (58.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.570302717s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-055130 "pgrep -a kubelet"
I1217 08:31:28.493562  556055 config.go:182] Loaded profile config "bridge-055130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-055130 replace --force -f testdata/netcat-deployment.yaml
I1217 08:31:29.487994  556055 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1217 08:31:29.491509  556055 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-q5bfb" [e35ed68e-aab3-4686-a24a-4da3b53fd898] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-q5bfb" [e35ed68e-aab3-4686-a24a-4da3b53fd898] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004600201s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (61.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m1.926063365s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (49.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (49.551758783s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-055130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-055130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (46.406412476s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-640910 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0f766262-719c-4660-98c5-17e8294dcee3] Pending
helpers_test.go:353: "busybox" [0f766262-719c-4660-98c5-17e8294dcee3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0f766262-719c-4660-98c5-17e8294dcee3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003788835s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-640910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-581631 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8cd6be07-b866-4ffa-92b9-52467bb7e162] Pending
helpers_test.go:353: "busybox" [8cd6be07-b866-4ffa-92b9-52467bb7e162] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8cd6be07-b866-4ffa-92b9-52467bb7e162] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.006819501s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-581631 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-640910 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-640910 --alsologtostderr -v=3: (16.145995661s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-581631 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-581631 --alsologtostderr -v=3: (16.339330162s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-936988 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [49a0ffc0-df8e-43e8-958f-ecdefc3a9cdc] Pending
helpers_test.go:353: "busybox" [49a0ffc0-df8e-43e8-958f-ecdefc3a9cdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1217 08:32:37.556049  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/functional-981680/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [49a0ffc0-df8e-43e8-958f-ecdefc3a9cdc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004672418s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-936988 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-936988 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-936988 --alsologtostderr -v=3: (16.403396999s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-225657 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0a4eb68c-1efc-41c5-8d95-7bb4c25b10bc] Pending
helpers_test.go:353: "busybox" [0a4eb68c-1efc-41c5-8d95-7bb4c25b10bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0a4eb68c-1efc-41c5-8d95-7bb4c25b10bc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003930988s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-225657 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)
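The final step in each DeployApp entry runs `ulimit -n` inside the deployed busybox pod. A small stand-alone sketch of that check is shown below; openFileLimit is a hypothetical helper name, and parsing the value as an integer is an addition, the test itself only executes the command:

// Hypothetical follow-up check: run `ulimit -n` inside the busybox pod
// (the same command the test issues) and parse the reported limit.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func openFileLimit(kubeContext string) (int, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	n, err := openFileLimit("default-k8s-diff-port-225657")
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Println("open file limit inside busybox:", n)
}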

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910: exit status 7 (92.465074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-640910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
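The `status error: exit status 7 (may be ok)` line reflects that `minikube status` exits non-zero for a stopped host, which is the expected state here. A hedged sketch of distinguishing "stopped" from a real failure follows; hostState is a hypothetical wrapper, and the node flag simply reuses the profile name as in the log:

// Sketch of the "exit status 7 (may be ok)" handling: run
// `minikube status --format={{.Host}}` and treat exit code 7 with
// output "Stopped" as an expected state rather than a failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostState(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		return state, nil // exit 7: host stopped or absent, "may be ok"
	}
	return state, err
}

func main() {
	state, err := hostState("old-k8s-version-640910")
	fmt.Println(state, err) // expect "Stopped" for a freshly stopped profile
}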

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (47.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-640910 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.937436942s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-640910 -n old-k8s-version-640910
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.38s)
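SecondStart re-runs `minikube start` against the stopped profile with the same flags and then re-checks status. The sketch below shows an equivalent invocation bounded by a context deadline; it is illustrative only, omits the KVM-specific flags from the log (--kvm-network, --kvm-qemu-uri, --disable-driver-mounts, --keep-context), and the 10-minute timeout is an assumption, not the harness's value:

// Sketch only: restart a stopped profile with a bounded deadline so a
// hung `minikube start` cannot block the run forever.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
		"-p", "old-k8s-version-640910",
		"--memory=3072", "--alsologtostderr", "--wait=true",
		"--driver=docker", "--container-runtime=crio",
		"--kubernetes-version=v1.28.0")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	start := time.Now()
	if err := cmd.Run(); err != nil {
		fmt.Println("second start failed:", err)
		return
	}
	fmt.Println("second start completed in", time.Since(start).Round(time.Second))
}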

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631: exit status 7 (84.03019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-581631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (45.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-581631 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (44.822055093s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-581631 -n embed-certs-581631
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (17.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-225657 --alsologtostderr -v=3
E1217 08:32:59.177414  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:33:01.739380  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-225657 --alsologtostderr -v=3: (17.619435659s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988: exit status 7 (105.16098ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-936988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (45.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 08:33:06.860987  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-936988 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (44.871023074s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-936988 -n no-preload-936988
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657: exit status 7 (119.522959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-225657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
E1217 08:33:17.102779  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-225657 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (47.910206378s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225657 -n default-k8s-diff-port-225657
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xhcfw" [bfd63338-0d19-477c-95f5-82e2f47d96e4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00347523s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
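The UserAppExistsAfterStop checks wait up to 9 minutes for a pod labelled k8s-app=kubernetes-dashboard to become healthy after the restart. An equivalent one-shot check can be expressed with `kubectl wait`; this is an alternative formulation rather than what the harness calls, with the 540s timeout mirroring the 9m budget:

// Alternative sketch: block until the kubernetes-dashboard pod is Ready,
// mirroring the k8s-app=kubernetes-dashboard check above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-581631",
		"wait", "--for=condition=Ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard", "--timeout=540s").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("dashboard pod did not become Ready:", err)
	}
}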

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qvtl9" [eaf0b178-b6e1-417d-8664-8d4f909a1c06] Running
E1217 08:33:37.584713  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/auto-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003718528s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xhcfw" [bfd63338-0d19-477c-95f5-82e2f47d96e4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004002753s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-581631 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qvtl9" [eaf0b178-b6e1-417d-8664-8d4f909a1c06] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003643511s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-640910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-581631 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
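VerifyKubernetesImages lists the images present in the restarted cluster and reports anything outside the expected minikube set, as in the "Found non-minikube image" lines above. The sketch below is a rough approximation: it assumes `image list --format=json` emits a JSON array of image reference strings (the real schema may differ) and uses a registry.k8s.io prefix as a stand-in allowlist rather than the test's actual expected-image table:

// Rough, assumption-laden sketch of the image audit step.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "embed-certs-581631", "image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var images []string // assumed output shape: ["registry.k8s.io/...", ...]
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output shape:", err)
		return
	}
	for _, img := range images {
		if !strings.HasPrefix(img, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}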

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-w6fbt" [dd6624a2-c1cf-4789-831d-bf83e4c0c4c1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004482577s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-640910 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-w6fbt" [dd6624a2-c1cf-4789-831d-bf83e4c0c4c1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00364622s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-936988 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (25.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (25.2728139s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-936988 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-z7zjk" [47523a42-5d46-4c4f-be74-564902ca582a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004153487s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-z7zjk" [47523a42-5d46-4c4f-be74-564902ca582a] Running
E1217 08:34:13.214584  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kindnet-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004205466s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-225657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225657 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (17.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-441323 --alsologtostderr -v=3
E1217 08:34:28.577784  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kindnet-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-441323 --alsologtostderr -v=3: (17.949373745s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (17.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323: exit status 7 (89.293908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-441323 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 08:34:49.059702  556055 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kindnet-055130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-441323 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (9.78120405s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-441323 -n newest-cni-441323
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-441323 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
373 TestNetworkPlugins/group/kubenet 3.97
381 TestNetworkPlugins/group/cilium 4.46
395 TestStartStop/group/disable-driver-mounts 0.21
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-055130 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-055130" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-568559
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-201781
contexts:
- context:
    cluster: kubernetes-upgrade-568559
    user: kubernetes-upgrade-568559
  name: kubernetes-upgrade-568559
- context:
    cluster: running-upgrade-201781
    user: running-upgrade-201781
  name: running-upgrade-201781
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-568559
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/client.crt
    client-key: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/client.key
- name: running-upgrade-201781
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/running-upgrade-201781/client.crt
    client-key: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/running-upgrade-201781/client.key

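Note: the kubeconfig above lists only the kubernetes-upgrade-568559 and running-upgrade-201781 clusters, with an empty current-context; there is no entry for kubenet-055130, which is why every kubectl-based probe in this section reports that the context does not exist. As a minimal, illustrative sketch (not part of the test suite), the expected context could be verified up front with client-go's clientcmd package before running any kubectl collection; the kubeconfig path resolution and the hard-coded profile name below are assumptions for the example:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the kubeconfig the way kubectl does by default:
	// $KUBECONFIG if set, otherwise ~/.kube/config.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Fprintln(os.Stderr, "cannot determine home dir:", err)
			os.Exit(1)
		}
		path = filepath.Join(home, ".kube", "config")
	}

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}

	// Hypothetical check for the context the debug collector expects.
	name := "kubenet-055130"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist (current-context: %q)\n", name, cfg.CurrentContext)
		return
	}
	fmt.Printf("context %q found\n", name)
}

Run against the dump above, a check like this would report the missing context once, up front, instead of once per probe.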
                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-055130

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-055130"

                                                
                                                
----------------------- debugLogs end: kubenet-055130 [took: 3.76347315s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-055130" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-055130
--- SKIP: TestNetworkPlugins/group/kubenet (3.97s)
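All of the host-side probes in this section (the repeated "Profile \"kubenet-055130\" not found" messages) fail for the same reason the kubectl probes do: the kubenet-055130 profile was never created before the test was skipped, yet the debug collector still runs its full command matrix. A minimal sketch of guarding that collection, using the same out/minikube-linux-amd64 binary and the "minikube profile list" command the messages themselves suggest (the helper name and the plain substring match are illustrative assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// profileExists shells out to "minikube profile list" and does a plain
// substring match on the profile name; purely illustrative.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list").CombinedOutput()
	if err != nil {
		// minikube may exit non-zero here; surface the combined output for debugging.
		return false, fmt.Errorf("profile list failed: %v: %s", err, out)
	}
	return strings.Contains(string(out), name), nil
}

func main() {
	ok, err := profileExists("kubenet-055130")
	if err != nil || !ok {
		fmt.Println(`profile "kubenet-055130" not available; skipping host-side probes (crictl, iptables, kubelet, crio)`)
		return
	}
	fmt.Println("profile found; host-side debug collection can proceed")
}

The same guard would apply verbatim to the cilium-055130 section that follows.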

                                                
                                    
TestNetworkPlugins/group/cilium (4.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-055130 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-055130" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-568878
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-568559
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-552461/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-201781
contexts:
- context:
    cluster: NoKubernetes-568878
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 08:25:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-568878
  name: NoKubernetes-568878
- context:
    cluster: kubernetes-upgrade-568559
    user: kubernetes-upgrade-568559
  name: kubernetes-upgrade-568559
- context:
    cluster: running-upgrade-201781
    user: running-upgrade-201781
  name: running-upgrade-201781
current-context: NoKubernetes-568878
kind: Config
users:
- name: NoKubernetes-568878
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/NoKubernetes-568878/client.crt
    client-key: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/NoKubernetes-568878/client.key
- name: kubernetes-upgrade-568559
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/client.crt
    client-key: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/kubernetes-upgrade-568559/client.key
- name: running-upgrade-201781
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/running-upgrade-201781/client.crt
    client-key: /home/jenkins/minikube-integration/22182-552461/.minikube/profiles/running-upgrade-201781/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-055130

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-055130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055130"

                                                
                                                
----------------------- debugLogs end: cilium-055130 [took: 4.267536496s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-055130" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-055130
--- SKIP: TestNetworkPlugins/group/cilium (4.46s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-606497" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-606497
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    